Similar Articles
 Found 20 similar articles (search time: 11 ms)
1.
Wormhole networks have traditionally used deadlock avoidance strategies. More recently, deadlock recovery strategies have begun to gain acceptance. In particular, progressive deadlock recovery techniques allocate a few dedicated resources to quickly deliver deadlocked packets. Deadlock recovery is based on the assumption that deadlocks are rare; otherwise, recovery techniques are not efficient. Measurements of deadlock occurrence frequency show that deadlocks are highly unlikely when enough routing freedom is provided. However, networks are more prone to deadlock when traffic is close to or beyond saturation, causing some network performance degradation. Similar performance degradation behavior at saturation was also observed in networks using deadlock avoidance strategies. In this paper, we take a different approach to handling deadlocks and performance degradation. We propose the use of an injection limitation mechanism that prevents performance degradation near the saturation point and, at the same time, reduces the probability of deadlock to negligible values. We also propose an improved deadlock detection mechanism that uses only local information, detects all deadlocks, and considerably reduces the probability of false deadlock detection over previous proposals. In the rare case when impending deadlock is detected, our proposal consists of using a simple recovery technique that absorbs the deadlocked message at the current node and later reinjects it for continued routing toward its destination. Performance evaluation results show that our new approach to handling deadlock is more efficient than previously proposed techniques.
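As a rough illustration of the injection-limitation idea, the sketch below (with invented thresholds, counters, and class names rather than the paper's actual mechanism) gates packet injection on the number of free virtual channels and flags long-blocked packets for the absorb-and-reinject recovery path:

```python
# Hypothetical sketch of an injection-limitation rule: a node may inject a
# new packet only while enough of its router's virtual channels are free.
# Thresholds and counters are illustrative, not the paper's exact mechanism.

class RouterPort:
    def __init__(self, num_virtual_channels: int, free_threshold: int):
        self.num_vcs = num_virtual_channels
        self.free_threshold = free_threshold  # minimum free VCs required to inject
        self.busy_vcs = 0

    def can_inject(self) -> bool:
        """Allow injection only when the node is clearly below saturation."""
        free_vcs = self.num_vcs - self.busy_vcs
        return free_vcs >= self.free_threshold

    def blocked_too_long(self, blocked_cycles: int, timeout: int = 1000) -> bool:
        """Local heuristic: a packet blocked past the timeout is treated as
        potentially deadlocked and handed to the recovery path, i.e.,
        absorbed at the current node and reinjected later."""
        return blocked_cycles >= timeout

port = RouterPort(num_virtual_channels=4, free_threshold=2)
port.busy_vcs = 3
print(port.can_inject())            # False: too close to saturation
print(port.blocked_too_long(1500))  # True: trigger absorb-and-reinject recovery
```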

2.
Requirements traceability offers many benefits to software projects, and it has been identified as critical for successful development. However, numerous challenges exist in the implementation of traceability in the software engineering industry. Some of these challenges can be overcome through organizational policy and procedure changes, but the lack of cost-effective traceability models and tools remains an open problem. A novel, cost-effective solution for the traceability tool problem is proposed, prototyped, and tested in a case study using an actual software project. Metrics from the case study are presented to demonstrate the viability of the proposed solution for the traceability tool problem. The results show that the proposed method offers significant advantages over implementing traceability manually or using existing commercial traceability approaches.
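For readers unfamiliar with what such a tool maintains, here is a minimal, hypothetical picture of a traceability matrix and an orphan-requirement check; the identifiers and artifact categories are invented, not taken from the case study:

```python
# Minimal illustration of machine-checkable traceability links between
# requirements and downstream artifacts, the kind of record a lightweight
# traceability tool maintains. Identifiers and the orphan check are invented.

traceability = {
    "REQ-1": {"design": ["D-3"], "code": ["auth.py"], "tests": ["test_auth.py"]},
    "REQ-2": {"design": ["D-4"], "code": [], "tests": []},
}

def untraced(matrix):
    """Requirements lacking links in at least one downstream category."""
    return {req: [cat for cat, links in cats.items() if not links]
            for req, cats in matrix.items()
            if any(not links for links in cats.values())}

print(untraced(traceability))   # {'REQ-2': ['code', 'tests']}
```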

3.
Context: Object-oriented software undergoes continuous changes, often made without consideration of the software's overall structure and design rationale. Hence, over time, the design quality of the software degrades, causing software aging or software decay. Refactoring offers a means of restructuring software design to improve maintainability. In practice, the effort that can be invested in refactoring is limited; the problem therefore calls for a method of identifying cost-effective refactorings that efficiently improve maintainability. The cost-effectiveness of an applied refactoring can be expressed as the maintainability improvement over the invested refactoring effort (cost): the more cost-effective the refactorings applied to a system, the greater the improvement in its maintainability. Several studies support the argument that changes are more likely to occur in the pieces of code most frequently exercised by users; applying refactorings to these parts would therefore improve the maintainability of the software more quickly. For this reason, dynamic information is needed to identify the entities involved in given scenarios/functions of a system, and refactoring candidates need to be extracted from within these entities.
Objective: This paper provides an automated approach to identifying cost-effective refactorings using dynamic information in object-oriented software.
Method: To perform cost-effective refactoring, refactoring candidates are extracted in a way that reduces the dependencies observed at run time (the dynamic information). A dynamic profiling technique is used to obtain the dependencies of entities based on dynamic method calls. Based on those dynamic dependencies, refactoring-candidate extraction rules are defined and a maintainability evaluation function is established. Refactoring candidates are then extracted and assessed using the defined rules and the evaluation function, respectively. The best refactoring (i.e., the one that most improves maintainability) is selected from among the candidates; candidate extraction and assessment are then re-performed to select the next refactoring, and this identification process is iterated until no further refactoring candidates that improve maintainability are found.
Results: We evaluate our proposed approach on three open-source projects. The first set of results shows that dynamic information is helpful in identifying cost-effective refactorings that quickly improve maintainability, and that considering dynamic information in addition to static information provides even more opportunities to identify cost-effective refactorings. The second set of results shows that dynamic information is helpful in extracting refactoring candidates from the classes where real changes occurred; the results also offer promising support for the contention that using dynamic information helps to extract refactoring candidates from highly ranked, frequently changed classes.
Conclusion: Our proposed approach helps to identify cost-effective refactorings and supports an automated refactoring identification process.
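The iterative select-best-candidate loop can be pictured with the following toy sketch; the scoring tuples and candidate names are placeholders, and the real approach re-extracts and re-assesses candidates after each application rather than working from a fixed list:

```python
# Illustrative greedy loop for cost-effective refactoring selection: score each
# candidate by maintainability gain per unit effort and repeatedly apply the
# best one. The scoring data and candidate extraction are placeholders, not
# the paper's actual rules or evaluation function.

def select_refactorings(candidates):
    """candidates: list of (name, maintainability_gain, effort) tuples."""
    applied = []
    remaining = list(candidates)
    while remaining:
        # Cost-effectiveness = maintainability improvement over invested effort.
        best = max(remaining, key=lambda c: c[1] / c[2])
        if best[1] <= 0:          # no remaining candidate still improves maintainability
            break
        applied.append(best[0])
        remaining.remove(best)    # re-extraction/re-assessment would happen here
    return applied

cands = [("MoveMethod:A.f->B", 0.30, 2.0),
         ("ExtractClass:C", 0.50, 5.0),
         ("InlineMethod:D.g", -0.10, 1.0)]
print(select_refactorings(cands))  # ['MoveMethod:A.f->B', 'ExtractClass:C']
```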

4.
5.
Protocol interoperability testing is an important means of ensuring interconnection and interoperation between protocol products. In this paper, we propose a formal approach to protocol interoperability testing based on the operational semantics of Concurrent TTCN. We define Concurrent TTCN's operational semantics using Labeled Transition Systems, and we describe interoperability test execution and test verdicts based on Concurrent TTCN. This approach is helpful for forming a formal theory of interoperability testing and for constructing general-purpose interoperability test systems.
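A minimal sketch of the underlying formal device, a labeled transition system; the three-state example below is invented for illustration and is not actual Concurrent TTCN semantics:

```python
# Sketch of a labeled transition system (LTS) of the kind used to give
# Concurrent TTCN an operational semantics: states, action labels, and a
# transition relation, plus a helper computing where a label can lead.

class LTS:
    def __init__(self, states, labels, transitions, initial):
        self.states = states            # set of states
        self.labels = labels            # set of action labels
        self.transitions = transitions  # set of (state, label, state) triples
        self.initial = initial

    def after(self, state, label):
        """All states reachable from `state` by one `label`-step."""
        return {t for (s, l, t) in self.transitions if s == state and l == label}

lts = LTS(states={"s0", "s1", "s2"},
          labels={"?req", "!resp"},
          transitions={("s0", "?req", "s1"), ("s1", "!resp", "s0"),
                       ("s1", "!resp", "s2")},
          initial="s0")
print(lts.after("s1", "!resp"))   # {'s0', 's2'}: a nondeterministic step
```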

6.
A practical approach to testing GUI systems
GUI systems are becoming increasingly popular thanks to their ease of use compared with traditional systems. However, GUI systems are often challenging to test due to their complexity and special features. Traditional testing methodologies are not designed to deal with the complexity of GUI systems; using these methodologies can result in increased time and expense. In our proposed strategy, a GUI system is divided into two abstract tiers: the component tier and the system tier. On the component tier, a flow graph is created for each GUI component. Each flow graph represents a set of relationships between the pre-conditions, event sequences and post-conditions for the corresponding component. On the system tier, the components are integrated to build up a viewpoint of the entire system. Tests on the system tier interrogate the interactions between the components. This method for GUI testing is simple and practical; we show its effectiveness through two empirical experiments and a discussion of the results.
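A hypothetical rendering of a component-tier flow-graph entry, relating pre-conditions and an event sequence to post-conditions; the data model and the LoginDialog example are illustrative only, not the authors' representation:

```python
# Minimal sketch of a component-tier flow-graph entry: each edge relates
# pre-conditions and an ordered GUI event sequence to expected post-conditions.

from dataclasses import dataclass, field

@dataclass
class FlowEdge:
    preconditions: frozenset
    events: tuple            # ordered GUI events, e.g. clicks and key presses
    postconditions: frozenset

@dataclass
class ComponentFlowGraph:
    component: str
    edges: list = field(default_factory=list)

    def matching_tests(self, state: frozenset):
        """Yield event sequences whose pre-conditions hold in `state`."""
        for edge in self.edges:
            if edge.preconditions <= state:
                yield edge.events, edge.postconditions

login = ComponentFlowGraph("LoginDialog")
login.edges.append(FlowEdge(frozenset({"dialog_open"}),
                            ("type_user", "type_password", "click_ok"),
                            frozenset({"authenticated"})))
for events, post in login.matching_tests(frozenset({"dialog_open"})):
    print(events, "->", sorted(post))
```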

Ping Li   received her M.Sc. in Computer Engineering from the University of Alberta, Canada, in 2004. She is currently working for Waterloo Hydrogeologic Inc., a Schlumberger Company, as a Software Quality Analyst. Toan Huynh   received a B.Sc. in Computer Engineering from the University of Alberta, Canada. He is currently a PhD candidate at the same institution. His research interests include: web systems, e-commerce, software testing, vulnerabilities and defect management, and software approaches to the production of secure systems. Marek Reformat   received his M.Sc. degree from Technical University of Poznan, Poland, and his Ph.D. from University of Manitoba, Canada. His interests were related to simulation and modeling in time-domain, as well as evolutionary computing and its application to optimization problems. For three years he worked for the Manitoba HVDC Research Centre, Canada, where he was a member of a simulation software development team. Currently, Marek Reformat is with the Department of Electrical and Computer Engineering at University of Alberta. His research interests lie in the areas of application of Computational Intelligence techniques, such as neuro-fuzzy systems and evolutionary computing, as well as probabilistic and evidence theories to intelligent data analysis leading to translating data into knowledge. He applies these methods to conduct research in the areas of Software Engineering, Software Quality in particular, and Knowledge Engineering. Dr. Reformat has been a member of program committees of several conferences related to Computational Intelligence and evolutionary computing. He is a member of the IEEE Computer Society and ACM. James Miller   received the B.Sc. and Ph.D. degrees in Computer Science from the University of Strathclyde, Scotland. During this period, he worked on the ESPRIT project GENEDIS on the production of a real-time stereovision system. Subsequently, he worked at the United Kingdom's National Electronic Research Initiative on Pattern Recognition as a Principal Scientist, before returning to the University of Strathclyde to accept a lectureship, and subsequently a senior lectureship in Computer Science. Initially during this period his research interests were in Computer Vision, and he was a co-investigator on the ESPRIT 2 project VIDIMUS. Since 1993, his research interests have been in Software and Systems Engineering. In 2000, he joined the Department of Electrical and Computer Engineering at the University of Alberta as a full professor and in 2003 became an adjunct professor at the Department of Electrical and Computer Engineering at the University of Calgary. He is the principal investigator in a number of research projects that investigate software verification and validation issues across various domains, including embedded, web-based and ubiquitous environments. He has published over one hundred refereed journal and conference papers on Software and Systems Engineering (see www.steam.ualberta.ca for details on recent directions); and currently serves on the program committee for the IEEE International Symposium on Empirical Software Engineering and Measurement; and sits on the editorial board of the Journal of Empirical Software Engineering.

7.
This paper proposes a formal approach to protocol performance testing based on an extended Concurrent TTCN. To meet the needs of protocol performance testing, Concurrent TTCN is extended, and the extended Concurrent TTCN's operational semantics is defined in terms of Input-Output Labeled Transition Systems. An architecture design of a protocol performance test system is described, and an example of test cases and their test results is given.

8.
The paper describes the use of formal development methods on an industrial safety-critical application. The Z notation was used for documenting the system specification and part of the design, and the SPARK subset of Ada was used for coding. However, perhaps the most distinctive nature of the project lies in the amount of proof that was carried out: proofs were carried out both at the Z level (approximately 150 proofs in 500 pages) and at the SPARK code level (approximately 9000 verification conditions generated and discharged). The project was carried out under UK Interim Defence Standards 00-55 and 00-56, which require the use of formal methods on safety-critical applications. It is believed to be the first to be completed against the rigorous demands of the 1991 version of these standards. The paper includes comparisons of proof with the various types of testing employed, in terms of their efficiency at finding faults. The most striking result is that the Z proof appears to be substantially more efficient at finding faults than the most efficient testing phase. Given the importance of early fault detection, we believe this helps to show the significant benefit and practicality of large-scale proof on projects of this kind.

9.
There have been numerous advancements and rising competition in semiconductor technologies. In light of this, the wafer test plays an increasingly significant role in providing the most prompt yield feedback for quick process improvement. However, the wafer test shop floor is becoming more complicated than ever before because of the increasing change-over rate, nonlinear wafer arrival, and preemption by urgent orders. Furthermore, the foundry wafer test is a heterogeneous production environment with different production cycle times and a large variety of nonidentical testers. Shop floor conditions, including the work in process (WIP) pool, tester status, and work order priority, change continuously. There is a need to operate this kind of production line so that it simultaneously fulfills multiple objectives: maximum confirmed line item performance (CLIP) for normal lots, 100% CLIP for urgent lots, a minimum change-over rate, and the shortest cycle time. Thus, a reactive dispatching approach is proposed that provides a real-time solution however the shop floor changes. The dynamic approach is triggered mainly by two kinds of major events: when an urgent lot arrives, and when a tester becomes idle. In addition, through a two-phase dispatching algorithm with lot ranking and lot assignment methods, prioritized WIP lots and an appropriate lot assignment are suggested. A better performance measure is obtained by considering the multiple objectives that wafer test operations seek to achieve.
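The event-triggered, two-phase idea can be sketched as follows; the ranking key (urgent lots first, then longest-waiting) and the lot/tester fields are assumptions for illustration, not the paper's exact algorithm:

```python
# Hypothetical two-phase dispatch, triggered when a tester goes idle:
# phase 1 ranks WIP lots (urgent first, then by waiting time); phase 2
# assigns the top-ranked lot the tester can run, preferring a lot whose
# product matches the tester's current setup to avoid a change-over.

def rank_lots(wip_lots):
    # lot: dict with 'id', 'urgent' (bool), 'waited' (hours), 'product'
    return sorted(wip_lots, key=lambda l: (not l["urgent"], -l["waited"]))

def assign_on_idle(tester, wip_lots):
    ranked = rank_lots(wip_lots)
    for lot in ranked:                       # prefer no change-over
        if lot["product"] in tester["capable"] and lot["product"] == tester["setup"]:
            return lot["id"]
    for lot in ranked:                       # otherwise accept a change-over
        if lot["product"] in tester["capable"]:
            return lot["id"]
    return None

tester = {"id": "T1", "capable": {"A", "B"}, "setup": "B"}
wip = [{"id": "L1", "urgent": False, "waited": 5.0, "product": "A"},
       {"id": "L2", "urgent": True,  "waited": 1.0, "product": "B"}]
print(assign_on_idle(tester, wip))  # 'L2': urgent lot matching current setup
```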

10.
An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e.g., the protocol specification) with the reference specification (e.g., the communication service specification). Nondeterminism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of nondeterminacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a transport protocol and a distributed mutual exclusion algorithm is described.
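A simplified version of the core mechanism: replay an observed trace against a nondeterministic reference specification by tracking the set of states the specification could be in. The transition table below is invented for the example:

```python
# Trace-based consistency checking against a (possibly nondeterministic)
# reference specification, modeled here as a plain finite state machine.

def check_trace(transitions, initial, trace):
    """transitions: dict mapping (state, event) -> set of successor states.
    Returns True iff the trace is a behavior the reference spec allows."""
    current = {initial}
    for event in trace:
        current = {t for s in current for t in transitions.get((s, event), set())}
        if not current:            # no spec state can explain this event
            return False
    return True

spec = {("idle", "connect"): {"wait"},
        ("wait", "accept"): {"open"},
        ("wait", "refuse"): {"idle"}}
print(check_trace(spec, "idle", ["connect", "accept"]))  # True
print(check_trace(spec, "idle", ["connect", "data"]))    # False
```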

11.
12.

On a cloud platform, user requests are managed through workload units called cloudlets, which are assigned to virtual machines through a cloudlet scheduling mechanism that aims mainly at minimizing request processing time by producing effective, small-length schedules. Efficient request processing, however, requires excessive utilization of high-performance resources, which incurs large overheads in terms of monetary cost and the energy consumed by physical machines, thereby rendering cloud platforms inadequate for cost-effective green computing environments. This paper proposes a power-aware cloudlet scheduling (PACS) algorithm for mapping cloudlets to virtual machines. The algorithm aims at reducing request processing time through small-length schedules while minimizing the energy consumed and the cost incurred. For allocation of virtual machines to cloudlets, the algorithm iteratively arranges virtual machines (VMs) in groups using weights computed through optimization and rescaling of parameters, including VM resources, the cost of utilizing resources, and power consumption. Experiments performed with a diverse set of cloudlet and virtual machine configurations show that the PACS algorithm achieves a significant overall performance improvement factor, ranging from 3.80 to 23.82, over other well-known cloudlet scheduling algorithms.
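A sketch of the rescaling-and-weighting step, assuming min-max rescaling and illustrative weights of 0.5/0.25/0.25; the published PACS algorithm may combine the parameters differently:

```python
# Min-max rescale each VM parameter, then combine so that higher compute
# capacity raises the score while cost and power consumption lower it.
# Weights and field names are assumptions for illustration.

def rescale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def pacs_style_scores(vms):
    mips  = rescale([v["mips"] for v in vms])
    cost  = rescale([v["cost"] for v in vms])
    power = rescale([v["power"] for v in vms])
    return {v["id"]: 0.5 * m + 0.25 * (1 - c) + 0.25 * (1 - p)
            for v, m, c, p in zip(vms, mips, cost, power)}

vms = [{"id": "vm1", "mips": 1000, "cost": 0.10, "power": 120},
       {"id": "vm2", "mips": 2500, "cost": 0.25, "power": 200},
       {"id": "vm3", "mips": 1500, "cost": 0.12, "power": 140}]
# Cloudlets would then be mapped to VM groups in descending score order.
print(sorted(pacs_style_scores(vms).items(), key=lambda kv: -kv[1]))
```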


13.
Unique Input–Output sequences (UIOs) are commonly used in conformance testing. Unfortunately, finding UIOs of minimal length is an NP-hard problem. This study presents a hybrid approach to generating UIOs automatically from a finite state machine (FSM) specification. The proposed hybrid approach harnesses the benefits of hill climbing (greedy search) and a heuristic algorithm. Hill climbing, which exploits domain knowledge, is capable of quickly generating good results; however, it may get stuck in a local minimum. To overcome this problem, we use a set of parameters called the seed, which allows the algorithm to generate a different result for each seed. Hill climbing generates the solutions implied by the seed, while a Genetic Algorithm is used as the seed generator. We compared the hybrid approach with a Genetic Algorithm, Simulated Annealing, a Greedy Algorithm, and Random Search. The experimental evaluation shows that the proposed hybrid approach outperforms the other methods. More specifically, we show that the Genetic Algorithm and Simulated Annealing exhibit similar performance, while both outperform the Greedy Algorithm. Finally, we generalize the proposed hybrid approach to seed-driven hybrid architectures and elaborate on how it can be adapted to a broad range of optimization problems.
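The seed-driven architecture in miniature, with plain random search standing in for the Genetic Algorithm and a toy bitstring objective standing in for UIO-length minimization; only the structure of the hybrid, not UIO construction itself, is shown:

```python
# Outer search proposes seeds; a deterministic hill climber expands each seed
# into a concrete solution. On this toy objective the climber always reaches
# the optimum; on harder objectives (like UIO length) it can get stuck in a
# local optimum, which is why varying the seed matters.

import random

def hill_climb(seed_bits):
    """Greedy improvement implied by the seed: flip any bit that helps."""
    bits, improved = list(seed_bits), True
    while improved:
        improved = False
        for i in range(len(bits)):
            if bits[i] == 0:       # flipping a 0 to 1 always improves one-max
                bits[i], improved = 1, True
    return bits, sum(bits)

def seed_driven_search(n_bits=16, n_seeds=5, rng=random.Random(7)):
    best = (None, -1)
    for _ in range(n_seeds):       # a Genetic Algorithm would evolve these seeds
        seed = [rng.randint(0, 1) for _ in range(n_bits)]
        solution, score = hill_climb(seed)
        if score > best[1]:
            best = (solution, score)
    return best

print(seed_driven_search())
```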

14.
Software security issues have been a major concern in the cyberspace community, so a great deal of research on security testing has been performed and various security testing techniques have been developed. Threat modeling provides a systematic way to identify threats that might compromise security, and it is a well-accepted practice in industry, but test case generation from threat models has not yet been addressed. Thus, in this paper, we propose a threat-model-based security testing approach that automatically generates security test sequences from threat trees and transforms them into executable tests. The security testing approach we consider consists, broadly, of three activities: building threat models with threat trees; generating security test sequences from threat trees; and creating executable test cases by considering valid and invalid inputs. To support our approach, we implemented security test generation techniques, and we also conducted an empirical study to assess the effectiveness of our approach. The results of our study show that our threat-tree-based approach is effective in exposing vulnerabilities.
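A toy illustration of deriving test sequences from a threat tree via OR-decomposition; the tree and node names are invented, and the paper's approach additionally handles AND-decomposition and executable test creation with valid/invalid inputs:

```python
# Each root-to-leaf path through OR-children yields one attack sequence to
# turn into a security test. The threat tree below is invented for the sketch.

def attack_paths(node, prefix=()):
    """node: (name, [children]); leaves are concrete attacker actions."""
    name, children = node
    path = prefix + (name,)
    if not children:
        yield path
        return
    for child in children:   # OR-decomposition: any child realizes the threat
        yield from attack_paths(child, path)

threat_tree = ("steal_session",
               [("xss", [("inject_script_in_comment", []),
                         ("inject_script_in_profile", [])]),
                ("sniff_cookie", [])])

for seq in attack_paths(threat_tree):
    print(" -> ".join(seq))
```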

15.
A state-based approach to integration testing based on UML models
Correct functioning of object-oriented software depends upon the successful integration of classes. While individual classes may function correctly, several new faults can arise when these classes are integrated together. In this paper, we present a technique to enhance testing of interactions among modal classes. The technique combines UML collaboration diagrams and statecharts to automatically generate an intermediate test model, called SCOTEM (State COllaboration TEst Model). The SCOTEM is then used to generate valid test paths. We also define various coverage criteria to generate test paths from the SCOTEM model. In order to assess our technique, we have developed a tool and applied it to a case study to investigate its fault detection capability. The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.
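A much-reduced picture of generating test paths from a state-based model, with a simple depth bound standing in for the paper's coverage criteria; the model is invented, and SCOTEM itself also incorporates collaboration-diagram information:

```python
# Enumerate message-sequence paths through a state model up to a bounded
# depth, akin to covering paths in a SCOTEM-like intermediate test model.

def test_paths(model, state, depth):
    """model: dict state -> list of (message, next_state). Yields event paths."""
    if depth == 0:
        yield ()
        return
    transitions = model.get(state, [])
    if not transitions:
        yield ()
        return
    for message, nxt in transitions:
        for rest in test_paths(model, nxt, depth - 1):
            yield (message,) + rest

model = {"Idle":   [("open()", "Active")],
         "Active": [("send()", "Active"), ("close()", "Idle")]}
for path in test_paths(model, "Idle", 3):
    print(" . ".join(path))
```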

16.
An approach is presented that uses the following techniques: automatic test case generation, self-checking test cases, black-box test cases, random test cases, sampling, a form of exhaustive testing, correctness measurements, and the correction of defects in the test cases instead of in the product (defect circumvention). The techniques are cost-effective and have been applied to very large products.
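One concrete combination of several of the listed techniques, shown as a hypothetical example: randomly generated, self-checking test cases whose checks are properties rather than hand-written expected values. The function under test is an invented stand-in:

```python
# Random, self-checking tests against a sorting routine: the check verifies
# properties (output ordered, and a permutation of the input) instead of
# comparing against manually prepared expected results.

import random

def function_under_test(xs):
    return sorted(xs)                 # stand-in for the real product code

def self_checking_random_tests(n_cases=100, rng=random.Random(42)):
    for _ in range(n_cases):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        out = function_under_test(xs)
        assert all(a <= b for a, b in zip(out, out[1:])), f"not sorted: {xs}"
        assert sorted(xs) == sorted(out), f"not a permutation: {xs}"
    return n_cases

print(self_checking_random_tests(), "random self-checking cases passed")
```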

17.
The testing phase of the software development process consumes about one-half of the development time and resources. This paper addresses the automation of the analysis stage of testing. Dual programming is introduced as one approach to implement this automation. It uses a higher level language to duplicate the functionality of the software under test. We contend that a higher level language (HLL) uses fewer lines of code than a lower level language (LLL) to achieve the same functionality, so testing the HLL program will require less effort than testing the LLL equivalent. The HLL program becomes the oracle for the LLL version. This paper describes experiments carried out using different categories of applications, and it identifies those most likely to profit from this approach. A metric is used to quantify savings realized. The results of the research are: (a) that dual programming can be used to automate the analysis stage of software testing; (b) that substantial savings of the cost of this testing phase can be realized when the appropriate pairing of primal and dual languages is made, and (c) that it is now possible to build a totally automated testing system. Recommendations are made regarding the applicability of the method to specific classes of applications.
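A minimal dual-programming harness under obviously simplified assumptions: both the "product" and its higher-level dual are invented stand-ins, and the concise dual serves as the oracle for the implementation under test:

```python
# Compare the implementation under test against its higher-level dual over a
# shared set of inputs; any disagreement is a candidate defect in one of them.

def product_under_test(xs):          # imagine this were the low-level version
    total = 0
    for x in xs:
        total += x * x
    return total

def dual_oracle(xs):                 # higher-level duplicate: shorter, easier to trust
    return sum(x * x for x in xs)

def run_dual_test(inputs):
    return [(xs, product_under_test(xs), dual_oracle(xs))
            for xs in inputs
            if product_under_test(xs) != dual_oracle(xs)]

cases = [[], [1, 2, 3], [-4, 0, 4]]
print(run_dual_test(cases) or "all cases agree with the dual")
```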

18.
To ensure the reliability and performance of a new system, it must be verified or validated in some manner. Currently, testing is the only reasonable technique available for doing this. Part of this testing process is the high-level system test. This paper considers system testing with respect to operating systems and, in particular, UNIX. This consideration results in the development and presentation of a good approach to the development and performance of the system test. The approach includes derivations from the system specifications and ideas for management of the system testing project. Results of applying the approach to the IBM System/9000 XENIX operating system test and the development of a UNIX test suite are presented.

19.
A formal framework for the analysis of execution traces collected from distributed systems at run-time is presented. We introduce the notions of event and message traces to capture the consistency of causal dependencies between the elements of a trace. We formulate an approach to property testing in which a partially ordered execution trace is modeled by a collection of communicating automata. We prove that the model exactly characterizes the causality relation between the events/messages in the observed trace, and we discuss the implementation of this approach in SDL, where ObjectGEODE is used to verify properties with model-checking techniques. Finally, we illustrate the approach with industrial case studies.
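As background, causal consistency of a collected trace can be illustrated with vector clocks, a standard device used here purely for exposition; the paper itself models traces with communicating automata rather than this check:

```python
# A receive is causally consistent only if the matching send is already in
# the observed past. Event/trace encoding is invented for this illustration.

def consistent(trace, n_procs):
    """trace: list of (proc, kind, msg_id); kind in {'send', 'recv', 'local'}."""
    clocks = [[0] * n_procs for _ in range(n_procs)]
    sends = {}
    for proc, kind, msg_id in trace:
        clocks[proc][proc] += 1
        if kind == "send":
            sends[msg_id] = [*clocks[proc]]
        elif kind == "recv":
            if msg_id not in sends:   # receive before its send: inconsistent
                return False
            clocks[proc] = [max(a, b) for a, b in zip(clocks[proc], sends[msg_id])]
    return True

ok  = [(0, "send", "m1"), (1, "recv", "m1")]
bad = [(1, "recv", "m1"), (0, "send", "m1")]
print(consistent(ok, 2), consistent(bad, 2))   # True False
```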

20.
A partition testing strategy consists of two components: a partitioning scheme, which determines the way in which the program's input domain is partitioned into subdomains; and an allocation of test cases, which determines the exact number of test cases selected from each subdomain. This paper investigates the problem of determining the test allocation when a particular partitioning scheme has been chosen. We show that this problem can be formulated as a classic problem of decision-making under uncertainty, and we analyze several well-known criteria for resolving this kind of problem. We present algorithms that solve the test allocation problem based on these criteria, and we evaluate the criteria by means of a simulation experiment. We also discuss the applicability and implications of applying these criteria in the context of partition testing.
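Two of the simpler allocation rules can be made concrete as follows; the weights and budget are example values, and the paper evaluates a wider set of decision criteria:

```python
# Given a budget of test cases and a fixed partition into subdomains,
# proportional allocation sizes each subdomain's sample by its probability
# weight, while even allocation spreads the budget uniformly.

def proportional_allocation(weights, budget):
    raw = [w * budget for w in weights]
    alloc = [int(r) for r in raw]
    # hand leftover cases to the largest fractional remainders
    for i in sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True):
        if sum(alloc) == budget:
            break
        alloc[i] += 1
    return alloc

def even_allocation(k, budget):
    base, extra = divmod(budget, k)
    return [base + (1 if i < extra else 0) for i in range(k)]

weights = [0.6, 0.3, 0.1]                    # subdomain usage probabilities
print(proportional_allocation(weights, 10))  # [6, 3, 1]
print(even_allocation(3, 10))                # [4, 3, 3]
```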
