Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper introduces a systematic, defined process called “comparison of design methodologies” (CDM) for objectively comparing software design methodologies (SDMs). We believe that using CDM leads to detailed, traceable, and objective comparisons. CDM uses process modeling techniques to model SDMs, classify their components, and analyze their procedural aspects. Modeling the SDMs entails decomposing their methods into components and analyzing the structure and functioning of those components. The classification of the components illustrates which components address similar design issues and/or have similar structures. Similar components may then be modeled further to understand their similarities and differences more precisely. The models of the SDMs also serve as the bases for conjectures and analyses about the differences between the SDMs. This paper describes three experiments that we carried out in evaluating CDM. The first uses CDM to compare Jackson System Development (JSD) and Booch's (1986) object-oriented design. The second uses CDM to compare two other pairs of SDMs. The last compares some of our comparisons with comparisons done in the past using different approaches. The results of these experiments demonstrate that process modeling is a powerful tool for analyzing software development approaches.

2.
3.
(Semi-)automated diagnosis of software faults can drastically increase debugging efficiency, improving reliability and time-to-market. Current automatic diagnosis techniques are predominantly of a statistical nature and, despite typical defect densities, do not explicitly consider multiple faults, as also demonstrated by the popularity of the single-fault benchmark set of programs. We present a reasoning approach, called Zoltar-M(ultiple fault), that yields multiple-fault diagnoses, ranked in order of their probability. Although application of Zoltar-M to programs with many faults requires heuristics (trading-off completeness) to reduce the inherent computational complexity, theory as well as experiments on synthetic program models and multiple-fault program versions available from the software infrastructure repository (SIR) show that for multiple-fault programs this approach can outperform statistical techniques, notably spectrum-based fault localization (SFL). As a side-effect of this research, we present a new SFL variant, called Zoltar-S(ingle fault), that is optimal for single-fault programs, outperforming all other variants known to date.
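The spectrum-based fault localization (SFL) baseline mentioned above ranks program components by the statistical similarity between each component's execution pattern and the test outcomes. The abstract does not give Zoltar-S's formula, so as an illustrative sketch here is the widely used Ochiai coefficient computed over a hit-spectra matrix; the function name and data layout are assumptions, not details from the paper:

```python
import math

def ochiai(spectra, errors):
    """Rank components by Ochiai suspiciousness.

    spectra: per-test hit vectors (1 = component executed in that test)
    errors:  per-test outcomes (1 = test failed)
    Returns one suspiciousness score per component.
    """
    n_comp = len(spectra[0])
    total_fail = sum(errors)
    scores = []
    for j in range(n_comp):
        # ef: executed in failing tests, ep: executed in passing tests
        ef = sum(1 for hits, e in zip(spectra, errors) if hits[j] and e)
        ep = sum(1 for hits, e in zip(spectra, errors) if hits[j] and not e)
        denom = math.sqrt(total_fail * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores
```

A component that is executed in every failing test and no passing test scores 1.0; components touched only by passing tests score 0.0.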

4.
Object-oriented metrics aim to exhibit the quality of source code and to provide quantitative insight into it. Each metric assesses the code from a different aspect, and there is a relationship between the quality level and the risk level of source code. The objective of this paper is to empirically examine whether or not there are effective threshold values for source code metrics, with the aim of deriving generalized thresholds that can be used across different software systems. The relationship between metric thresholds and fault-proneness was investigated empirically in this study using ten open-source software systems. Three types of fault-proneness were defined for the software modules: non-fault-prone, more-than-one-fault-prone, and more-than-three-fault-prone. Two independent case studies were carried out to derive two different threshold values. A single set was created by merging the ten datasets and was used as training data by the model. The learner model was created using logistic regression and the Bender method. Results revealed that some metrics have threshold effects: seven metrics gave satisfactory results in the first case study, and eleven in the second. This study primarily benefits software developers and testers. Developers can see which classes or modules require revising, which increases the quality of these modules and decreases their risk level. Testers can identify modules that need more testing effort and can prioritize modules according to their risk levels.
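The Bender method named above derives a threshold by inverting a fitted logistic regression model: the threshold is the metric value at which the predicted fault probability reaches a chosen acceptable risk level p0 (Bender's "value of an acceptable risk level", VARL). A minimal sketch, assuming the coefficients β0 and β1 have already been fitted; the numeric values in the usage line are made up for illustration and are not from the paper:

```python
import math

def varl(beta0, beta1, p0=0.05):
    """Bender's VARL: the metric value x at which the logistic model
    p(x) = 1 / (1 + exp(-(beta0 + beta1 * x))) predicts fault
    probability p0. Obtained by solving p(x) = p0 for x."""
    return (math.log(p0 / (1.0 - p0)) - beta0) / beta1

# Hypothetical fitted coefficients for some metric (illustrative only):
threshold = varl(beta0=-3.0, beta1=0.1, p0=0.05)
```

Modules whose metric value exceeds this threshold are predicted to carry more than the acceptable fault risk.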

5.
6.
Temporal logics are commonly used for reasoning about concurrent systems. Model checkers and other finite-state verification techniques allow automated checking of a system model's compliance with given temporal properties, typically specified as linear-time formulae in temporal logics. Unfortunately, the level of sophistication these formalisms demand too often impedes moving such techniques from “research theory” to “industry practice”. The objective of this work is to facilitate the nontrivial and error-prone task of specifying temporal properties correctly and without expertise in temporal logic. To ground a simple but expressive formalism for specifying temporal properties, we critically analyze visual notations commonly used in practice. We then present a scenario-based visual language called Property Sequence Chart (PSC) that, in our opinion, addresses the highlighted shortcomings of these notations by extending a subset of UML 2.0 Interaction Sequence Diagrams. We also provide PSC with both a denotational and an operational semantics; the operational semantics is obtained via translation into Büchi automata, and the translation algorithm is implemented as a plugin of our Charmy tool. The expressiveness of PSC has been validated against well-known property specification patterns. Preliminary results appeared in (Autili et al. 2006a).

7.
Constrained multibody system dynamics: an automated approach   (Cited by: 1; self-citations: 0; cited by others: 1)
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. Specifically, the “closed loop” problem of multibody chain systems is addressed.

The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. This modification, which is based upon a solution of the constraint equations obtained through a “zero eigenvalues theorem,” is, in effect, a contraction of the dynamical equations.

It is observed that, for a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as “constraint vectors” in n-dimensional space. In this setting, the system itself is free to move in the n − m directions which are “orthogonal” to the constraint vectors.
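The contraction described above can be illustrated numerically: collect the m constraint coefficient rows into a matrix; a basis of its null space spans the n − m admissible directions "orthogonal" to the constraint vectors. A pure-Python sketch using Gauss-Jordan elimination (the paper's zero-eigenvalues construction is not reproduced here; this is just the standard null-space computation, and the function name is an assumption):

```python
def null_space(A, tol=1e-10):
    """Basis for the null space of an m x n constraint matrix A
    (rows are the constraint vectors); the returned vectors span
    the n - rank(A) directions of admissible motion."""
    m, n = len(A), len(A[0])
    M = [row[:] for row in A]
    pivots, r = [], 0
    for c in range(n):
        # find a pivot row for column c
        piv = next((i for i in range(r, m) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]
        # eliminate column c from all other rows
        for i in range(m):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    # one basis vector per free (non-pivot) column
    basis = []
    for fc in (c for c in range(n) if c not in pivots):
        v = [0.0] * n
        v[fc] = 1.0
        for i, pc in enumerate(pivots):
            v[pc] = -M[i][fc]
        basis.append(v)
    return basis
```

For two independent constraints on three coordinates (m = 2, n = 3), the basis contains a single vector: the one free direction of motion.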


8.
Software fault prediction using different techniques has been studied by various researchers previously. The performance of these techniques varies from dataset to dataset, which makes them inconsistent predictors for unknown software projects. Ensemble methods, on the other hand, can be very effective for software fault prediction, since they exploit the strengths of several techniques on a given dataset to produce better predictions than any individual technique. Many works are available on binary-class software fault prediction (faulty or non-faulty) using ensemble methods, but ensembles for predicting the number of faults have not been explored so far. The objective of this work is to present a system that uses an ensemble of learning techniques to predict the number of faults in given software modules. We present a heterogeneous ensemble method for the prediction of the number of faults and use both a linear combination rule and a non-linear combination rule for the ensemble. The study is designed and conducted on different software fault datasets accumulated from publicly available data repositories. The results indicate that the presented system predicts the number of faults with higher accuracy, and the results are consistent across all datasets. We also use prediction at level l (Pred(l)) and a measure of completeness to evaluate the results; Pred(l) gives the number of modules in a dataset for which the average relative error is less than or equal to a threshold value l. Both analyses confirmed the effectiveness of the presented system for predicting the number of faults. Compared to single fault prediction techniques, the ensemble methods produced improved performance.
The main impact of this work is to allow better utilization of testing resources, helping in early and quick identification of most of the faults in a software system.
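Two ingredients named in the abstract are easy to state concretely: a linear combination rule aggregates the base learners' outputs with weights, and Pred(l) counts the modules whose relative prediction error stays within l. A minimal sketch (the function names, the use of the magnitude of relative error, and the equal weights in the usage line are assumptions, not details from the paper):

```python
def linear_ensemble(predictions, weights):
    """Linear combination rule: per-module weighted sum of the
    fault counts predicted by each base learner.
    predictions: one list of per-module predictions per learner."""
    return [sum(w * p for w, p in zip(weights, per_module))
            for per_module in zip(*predictions)]

def pred_l(actual, predicted, l):
    """Pred(l): number of modules whose relative error
    |actual - predicted| / actual is at most the threshold l."""
    return sum(1 for a, p in zip(actual, predicted)
               if a != 0 and abs(a - p) / a <= l)
```

For example, averaging two learners with weights 0.5/0.5 gives the per-module mean of their predicted fault counts, and Pred(0.3) then counts how many of those predictions land within 30% of the actual fault counts.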

9.
As the architecture of modern software systems continues to evolve in a distributed fashion, the development of such systems becomes increasingly complex, which requires the integration of more sophisticated specification techniques, tools, and procedures into the conventional methodology. An essential capability of an integrated software development environment is a formal specification method to capture effectively the system's functional requirements as well as its performance requirements. A validation and verification (V&V) system based on a formal specification method is of paramount importance to the development and maintenance of distributed systems.

There has been recent interest in integrating software techniques and tools at the specification level. It is also noted that an effective way of achieving such integration is by using wide-spectrum specification techniques. In view of these points, an integrated V&V system, called Integral, is presented that provides comprehensive and homogeneous analysis capabilities to both specification and testing phases of the life-cycle of distributed software systems. The underlying software model that supports various V&V activities in Integral is primarily based on Petri nets and is intended to be wide spectrum. The ultimate goal of this research is to demonstrate to the software industry, domestic or foreign, the availability and applicability of a new Petri-net-based software development paradigm. Integral is a prototype V&V system to support such a paradigm.


10.
Software Quality Journal - The globalization of the software industry has led to an emerging trend where software systems depend increasingly on the use of external open-source libraries...

11.
This paper describes a system for the automatic verification of commercial application specifications, SOFSPEC. After relating it to other requirements specification approaches, the user interface and the database schema are presented. The database schema is based on the entity/relationship model and encompasses four entities and six relationships with a varying number of attributes; these are briefly outlined. The paper then describes how these entities and relationships are checked against one another to ascertain the completeness and consistency of the specification before it is finally documented.

12.
Animating speech: an automated approach using speech synthesised by rules   (Cited by: 1; self-citations: 1; cited by others: 0)
This paper is concerned with the problem of animating computer-drawn images of speaking human characters, and particularly with reducing the cost of adequate lip synchronisation. Since the method is based on speech synthesis by rules, extended to manipulate facial parameters, and there is also a need to gather generalised data about facial expressions associated with speech, these problems are touched upon as well. Useful parallels can be drawn between the problems of speech synthesis and those of facial expression synthesis. The paper outlines the background to the work, the problems and some approaches to their solution, and goes on to describe work in progress in the authors' laboratories that has resulted in one apparently successful approach to low-cost animated speaking faces. Outstanding problems are noted, the chief ones being the difficulty of selecting and controlling appropriate facial expression categories; the lack of naturalness of the synthetic speech; and the need to consider the body movements and speech of all characters in an animated sequence during the animation process.

13.
An automated software design assistant was implemented as part of a long-term project with the objective of applying computer-aided techniques to the tools in a software engineering environment. A set of quantitative measures is derived based on the degree to which a particular design satisfies the attributes associated with a structured software design. The measures are then used as decision rules for a computer-aided methodology for structured design. The feasibility of the approach is demonstrated by a case study using a small application system design problem.

14.
15.
Although numerous empirical studies have been conducted to measure the fault detection capability of software analysis methods, few studies have been conducted using programs of similar size and characteristics. Therefore, it is difficult to derive meaningful conclusions on the relative detection ability and cost-effectiveness of various fault detection methods. In order to compare fault detection capability objectively, experiments must be conducted using the same set of programs to evaluate all methods and must involve participants who possess comparable levels of technical expertise. One such experiment was ‘Conflict1’, which compared voting, a testing method, self-checks, code reading by stepwise refinement and data-flow analysis methods on eight versions of a battle simulation program. Since an inspection method was not included in the comparison, the authors conducted a follow-up experiment ‘Conflict2’, in which five of the eight versions from Conflict1 were subjected to Fagan inspection. Conflict2 examined not only the number and types of faults detected by each method, but also the cost-effectiveness of each method, by comparing the average amount of effort expended in detecting faults. The primary findings of the Conflict2 experiment are the following. First, voting detected the largest number of faults, followed by the testing method, Fagan inspection, self-checks, code reading and data-flow analysis. Second, the voting, testing and inspection methods were largely complementary to each other in the types of faults detected. Third, inspection was far more cost-effective than the testing method studied. Copyright © 2002 John Wiley & Sons, Ltd.

16.
Ami: promoting a quantitative approach to software management   (Cited by: 1; self-citations: 1; cited by others: 0)
In this paper we first review the state of the art in software measurement. We then explain how measurement should be used in a goal-oriented manner in project management and describe the ami (application of metrics in industry) approach to achieving this. Finally we consider the ami project as an example of successful technology transfer and look at the variety of tactics used for dissemination of the approach in response to the needs of today's growing software measurement community.

17.
18.
Knowledge, 2006, 19(4): 235-247
Software testing is the technical kernel of software quality engineering. Developing critical and complex software systems requires not only complete, consistent, and unambiguous design and implementation methods, but also a suitable testing environment that meets certain requirements, particularly in the face of complexity. Traditional methods, such as analyzing each requirement and developing test cases to verify its correct implementation, are not effective in understanding the software's overall complex behavior. In that respect, existing approaches to software testing are time-consuming and insufficient for the dynamism of the modern business environment. This dynamism requires new tools and techniques that can be employed in tandem with innovative approaches to using and combining existing software engineering methods. This work advocates the use of a recently proposed software engineering paradigm, Agent-Oriented Software Engineering, which is particularly suited to the construction of complex and distributed software-testing systems, and outlines an approach to agent-based frameworks for testing.

19.
Kulik, P. IT Professional, 2000, 2(1): 38-42
Metrics programs that create meaningful change in software practice must start with business goals in mind. Software metrics are quantitative standards of measurement for various aspects of software projects. A well-designed metrics program will support decision making by management and enhance return on the IT investment. There are many aspects of software projects that can be measured, but not all aspects are worth measuring. Starting a new metrics program or improving a current program consists of five steps: identify business goals; select metrics; gather historical data; automate measurement procedures; and use metrics in decision making.

20.
The REBOOT approach to software reuse   (Cited by: 17; self-citations: 0; cited by others: 17)
Although some companies have been successful in software reuse, many research projects on reuse have had little industrial penetration. Often the proposed technology has been too ambitious or exotic, or did not scale up. REBOOT emphasizes industrial applicability of the proposed technology in a holistic perspective: a validated method through a Methodology Handbook, a stabilized tool set around a reuse library, a training package, and initial software repositories of reusable components extracted from company-specific projects. This article presents the REBOOT approach to software reuse, covering both organizational and technical aspects and the experiences so far from the applications.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)