1.
This paper analyzes the resource utilization curve developed by Parr. The curve is compared with several other curves, including the Rayleigh curve, a parabola, and a trapezoid, with respect to how well they fit manpower utilization. The evaluation is performed for several projects of the 6–12 man-year variety developed in the Software Engineering Laboratory. The conclusion drawn is that the Parr curve can be made to fit the data better than the other curves. However, because of the noise in the data, it is difficult to confirm the shape of the manpower distribution from the data alone and therefore difficult to validate any particular model. Also, since the parameters used in the curve are not easily calculable or estimable from known data, the curve is not effective for resource estimation.
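For readers who want to try such a comparison, here is a minimal sketch of fitting both curve shapes to monthly staffing data by least squares. The functional forms follow the commonly cited Putnam/Norden Rayleigh equation and Parr's sech² shape; the data, parameter values, and initial guesses are all invented for illustration.

```python
# A minimal sketch, assuming monthly staffing data: fit the Rayleigh
# and Parr shapes by least squares and compare residuals. The data
# below are synthetic; only the functional forms are from the models.
import numpy as np
from scipy.optimize import curve_fit

def rayleigh(t, K, a):
    # Putnam/Norden Rayleigh staffing curve: m(t) = 2*K*a*t*exp(-a*t^2)
    return 2 * K * a * t * np.exp(-a * t * t)

def parr(t, s, a, c):
    # Parr's sech^2 shape, with s as a free scale parameter
    return s / np.cosh((a * t + c) / 2.0) ** 2

t = np.arange(1.0, 25.0)                     # project month
rng = np.random.default_rng(0)
m = parr(t, 15.0, 0.4, -5.0) + rng.normal(0.0, 0.8, t.size)  # fake data

for name, f, p0 in [("Rayleigh", rayleigh, [180.0, 0.003]),
                    ("Parr",     parr,     [10.0, 0.3, -3.0])]:
    p, _ = curve_fit(f, t, m, p0=p0, maxfev=10000)
    sse = np.sum((m - f(t, *p)) ** 2)
    print(f"{name:8s} SSE = {sse:.1f}")
```

As the abstract notes, with noisy data the residuals of the two shapes can be close enough that neither fit confirms the underlying distribution.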
2.
This paper presents an attempt to examine a set of basic relationships among various software development variables, such as size, effort, project duration, staff size, and productivity. These variables are plotted against each other for 15 Software Engineering Laboratory projects that were developed for NASA/Goddard Space Flight Center by Computer Sciences Corporation. Certain relationships are derived in the form of equations, and these equations are compared with a set derived by Walston and Felix for IBM Federal Systems Division project data. Although the equations do not have the same coefficients, they are seen to have similar exponents. In fact, the Software Engineering Laboratory equations tend to be within one standard error of estimate of the IBM equations.
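As a concrete illustration of how such equations are derived, the sketch below fits effort = a · (size)^b by least squares in log-log space. The project data are invented; the comparison line E = 5.2 · L^0.91 is Walston and Felix's published effort equation (effort in man-months, L in thousands of lines).

```python
# A minimal sketch: derive E = a * L^b from project data by least
# squares in log-log space. The eight data points are invented.
import numpy as np

kloc   = np.array([8.0, 12.0, 20.0, 33.0, 47.0, 60.0, 75.0, 90.0])
effort = np.array([35.0, 52.0, 90.0, 150.0, 200.0, 260.0, 320.0, 370.0])

b, log_a = np.polyfit(np.log(kloc), np.log(effort), 1)
a = np.exp(log_a)

print(f"fitted:        E = {a:.2f} * L^{b:.2f}")
print("Walston-Felix: E = 5.20 * L^0.91")
```

Two data sets can yield quite different coefficients a while agreeing closely on the exponent b, which is the pattern the abstract reports.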
3.
Richard Hamlet, Journal of Systems and Software, 1981, 2(2): 89–96
Most evaluations of software tools and methodologies could be called “public relations,” because they are subjective arguments given by proponents. The need for markedly increased productivity in software development is now forcing better evaluation criteria to be used. Software engineering must begin to live up to its second name by finding quantitative measures of quality. This paper suggests some evaluation criteria that are probably too difficult to carry out, criteria that may always remain subjective. It argues that these are so important that we should keep them in mind as a balance to the hard data we can obtain and should seek to learn more about them despite the difficulty of doing so. A historical example is presented as illustration of the necessity of retaining subjective criteria. High-level languages and their compilers today enjoy almost universal acceptance. It will be argued that the value of this tool has never been precisely evaluated, and if narrow measures had been applied at its inception, it would have been found wanting. This historical lesson is then applied to the problem of evaluating a novel specification and testing tool under development at the University of Maryland.
4.
A natural ω + 1 hierarchy of successively more general criteria of success for inductive inference machines is described, based on the size of the sets of anomalies in programs synthesized by such machines. These criteria are compared to others in the literature. Some of our results are interpreted as tradeoff results or as showing the inherent relative computational complexity of certain processes; others are interpreted, from a positivistic, mechanistic philosophical stance, as theorems in the philosophy of science. The techniques of recursive function theory are employed, including ordinary and infinitary recursion theorems.
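In the notation standard in this literature (which may differ from the paper's own formalism), the anomaly criteria can be stated as follows: a machine M EX^n-identifies a function f if, fed the graph of f, it converges to a single program that computes f except on at most n arguments, with EX^* permitting any finite set of anomalies. The classes of function sets so identifiable form a proper hierarchy of order type ω + 1:

```latex
% M EX^n-identifies f: M, fed the graph of f, converges to a program p
% that disagrees with f on at most n arguments; n = * allows any
% finite number of anomalies. The identifiable classes form a proper
% hierarchy of order type omega + 1:
\[
  EX^{0} \subsetneq EX^{1} \subsetneq EX^{2} \subsetneq \cdots \subsetneq EX^{*}
\]
```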
5.
Most extant debugging aids force their users to think about errors in programs from a low-level, unit-at-a-time perspective. Such a perspective is inadequate for debugging large complex systems, particularly distributed systems. In this paper, we present a high-level approach to debugging that offers an alternative to the traditional techniques. We describe a language, EDL, developed to support this high-level approach to debugging and outline a set of tools that has been constructed to effect this approach. The paper includes an example illustrating the approach and discusses a number of problems encountered while developing these debugging tools.
6.
A testing-based approach for constructing and refining very high-level software functionality representations such as intentions, natural language assertions, and formal specifications is presented and applied to a standard line-editing problem as an illustration. The approach involves the use of specification-based (black-box) test-case generation strategies, high-level specification formalisms, redundant or parallel development and cross-validation, and a logic programming support environment. Test-case reference sets are used as software functionality representations for the purposes of cross-validating two distinct high-level representations, and identifying ambiguities and omissions in those representations. In fact, we propose the use of successive refinements of such test reference sets as the authoritative specification throughout the software development process. Potential benefits of the approach include improvements in user/designer communication over all life cycle phases, and an increase in the quality of specifications and designs.
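A minimal sketch of the cross-validation step, with an invented line-editing rule: two independently written representations of "delete trailing blanks" are run against a shared test-case reference set, and any disagreement flags an ambiguity in the higher-level representation (here, whether "blanks" includes tabs).

```python
# A minimal sketch of cross-validation via a shared test reference
# set. The editing rule and both "representations" are invented; the
# point is that the reference set exposes where two representations
# of the same intention silently disagree.

def spec_a(line: str) -> str:
    """Representation A of 'delete trailing blanks'."""
    return line.rstrip(" ")

def spec_b(line: str) -> str:
    """Representation B, written independently; strips all whitespace."""
    return line.rstrip()

# The test-case reference set doubles as the working specification.
reference_set = ["abc", "abc ", "abc   ", "", "   ", " a b ", "abc\t"]

for case in reference_set:
    a, b = spec_a(case), spec_b(case)
    if a != b:
        print(f"ambiguity exposed by {case!r}: {a!r} vs {b!r}")
```

The disagreement on "abc\t" is exactly the kind of omission in a natural-language assertion that the approach is meant to surface; the resolved case is then added to the reference set, which becomes the authoritative specification.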
7.
Robert Cuykendall, Anton Domic, William H. Joyner, Steve C. Johnson, Steve Kelem, Dennis McBride, Jack Mostow, John E. Savage, Gabriele Saucier, Journal of Systems and Software, 1984, 4(1): 7–12
In this paper we present the results of the Working Group on Design Synthesis and Measurement. This group explored the issues that separate and bind software engineering and VLSI design. The issues on which we comment are design views and tradeoffs, levels of abstraction and their importance, and design methodologies and their effect on decisions. We also examine the support environments needed to facilitate design in VLSI and software engineering, the state of the art of silicon compilation today, and the types of problems that are best suited to silicon compilation.
8.
We state a set of criteria that has guided the development of a metric system for measuring the quality of a large-scale software product. This metric system uses the flow of information within the system as an index of system interconnectivity. Based on this observed interconnectivity, a variety of software metrics can be defined. The types of software quality features that can be measured by this approach are summarized. The data-flow analysis techniques used to establish the paths of information flow are explained and illustrated. Finally, a means of integrating various metrics and models into a comprehensive software development environment is discussed. This possible integration is explained in terms of the Gandalf system currently under development at Carnegie-Mellon University.
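One widely cited metric of this kind weights a procedure's length by the square of its fan-in/fan-out product; the sketch below computes it from a table of flows. The procedure names and counts are invented, and the formula is the Henry-Kafura form rather than necessarily the exact definition used in the paper.

```python
# A minimal sketch: an information-flow complexity in the style of
# the Henry-Kafura metric, length * (fan_in * fan_out)^2, computed
# from a table of per-procedure flows. All data here are invented.

flows = {
    # procedure: (length in lines, fan-in, fan-out)
    "parse":  (120, 3, 5),
    "lookup": ( 40, 7, 1),
    "emit":   ( 85, 2, 4),
}

for proc, (length, fan_in, fan_out) in flows.items():
    complexity = length * (fan_in * fan_out) ** 2
    print(f"{proc:8s} complexity = {complexity}")
```

Procedures with unusually high scores mark points of dense interconnection, which is what makes such a metric usable as an index of system quality.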
9.
Jack C. Wileden, John H. Sayler, William E. Riddle, Alan R. Segal, Allan M. Stavely, Journal of Systems and Software, 1983, 3(2): 123–135
A technique for software system behavior specification appropriate for use in designing systems with concurrency is presented. The technique is based upon a generalized ability to define events, or significant occurrences in a software system, and then indicate whatever constraints the designer might wish to see imposed upon the ordering or simultaneity of those events. Constructs implementing this technique in the DREAM software design system are presented and illustrated. The relationship of this technique to other behavior specification techniques is also discussed.
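A minimal sketch of the constraint-checking idea, with invented event names and constraint forms (not DREAM's actual notation): each stated ordering constraint is checked against an observed event sequence.

```python
# A minimal sketch: events are named occurrences, and the designer
# states ordering constraints that every observed behavior (an event
# sequence) must satisfy. Event names and the constraint form below
# are invented for illustration.

def precedes(a, b, trace):
    """Every occurrence of b must be preceded by some earlier a."""
    seen_a = False
    for e in trace:
        if e == a:
            seen_a = True
        elif e == b and not seen_a:
            return False
    return True

trace = ["open", "lock", "write", "unlock", "close"]

constraints = [("open", "write"), ("lock", "write"), ("close", "write")]
for a, b in constraints:
    print(f"'{a}' precedes '{b}': {precedes(a, b, trace)}")
```

The last constraint fails on this trace, which is how a stated behavior specification catches an ordering the designer did not intend.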
10.
Many software tools are interactive in nature and require a close match between the user's knowledge of how a task is to be performed and the capabilities the tool provides. This paper describes the current status of an instrumentation and analysis package to measure user performance in an interactive system. A prototype measurement system is considered to evaluate a screen editor and to develop models of user behavior.
11.
A finite automaton with multiplication (FAM) is a finite automaton with a register which is capable of holding any positive rational number. The register can be multiplied by any of a fixed number of rationals and can be tested for the value 1. Closure properties and decision problems for various types of FAM's (e.g., two-way, one-way, nondeterministic, deterministic) are investigated. In particular, it is shown that the languages recognized by two-way deterministic FAM's are of tape complexity log n and time complexity n³. Some decision problems related to vector addition systems are also studied.
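A small illustration of the machine model, as a sketch: a one-way deterministic FAM whose register starts at 1, is multiplied by 2 on each a and by 1/2 on each b, and is tested for 1 at the end recognizes { aⁿbⁿ : n ≥ 1 }. The example language is mine, not one from the paper.

```python
# A minimal sketch of a one-way deterministic FAM: a finite control
# plus a register holding a positive rational, multiplied by fixed
# rationals and tested for the value 1 to accept.
from fractions import Fraction

def fam_accepts(word: str) -> bool:
    register = Fraction(1)
    state = "A"                      # finite control: a-block, then b-block
    for ch in word:
        if state == "A" and ch == "a":
            register *= 2
        elif ch == "b":              # first 'b' moves control to state B
            state = "B"
            register *= Fraction(1, 2)
        else:
            return False             # 'a' after 'b', or a bad symbol
    return state == "B" and register == 1

for w in ["ab", "aabb", "aab", "abab", ""]:
    print(f"{w!r}: {fam_accepts(w)}")
```

The register does the counting that the finite control alone cannot, which is why FAM's recognize non-regular languages like this one.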
12.
Program testing techniques can be classified in many ways. One classification is that of “black box” vs. “white box” testing. In black box testing, test data are selected according to the purpose of the program, independent of the manner in which the program is actually coded. White box testing, on the other hand, makes use of the properties of the source code to guide the testing process. A white box testing strategy, which involves integrating a previously validated module into a software system, is described. It is shown that, when doing the integration testing, it is not enough to treat the module as a “black box,” for otherwise certain integration errors may go undetected. For example, an error in the calling program may cause an error in the module's input which only results in an error in the module's output along certain paths through the module. These errors can be classified as integration domain errors and integration computation errors. The results indicate that such errors can be detected by retesting a set of module paths whose cardinality depends only on the dimensionality of the module's input for integration domain errors, and on the dimensionality of the module's inputs and outputs for integration computation errors. In both cases the number of paths that need be retested does not depend on the module's path complexity. An example of the strategy as applied to the path testing of a COBOL program is presented.
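The masking effect described here is easy to see in a toy example, invented for illustration: an off-by-one error in the caller perturbs the module's input, but the perturbation is visible in the output only along the path that does not collapse its input.

```python
# A minimal sketch of an integration error that is path-dependent:
# the caller-side bug changes the module's input, yet the module's
# output differs from the correct output only along one path.

def module(x: int) -> int:
    if x > 0:          # path 1: passes the input through
        return x
    else:              # path 2: collapses inputs, masking small errors
        return 0

def correct_caller(v): return module(v)
def buggy_caller(v):   return module(v + 1)   # off-by-one integration error

for v in [-5, -1, 0, 3]:
    good, bad = correct_caller(v), buggy_caller(v)
    note = "error visible" if good != bad else "error masked"
    print(f"v={v:>2}: correct={good}, buggy={bad}  ({note})")
```

Black-box retesting of the module alone cannot reveal the masked cases, which is the abstract's argument for retesting selected paths through the module during integration.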
13.
This paper introduces a modified version of path expressions called Path Rules, which can be used as a debugging mechanism to monitor the dynamic behavior of a computation. Path rules have been implemented in a remote symbolic debugger running on the Three Rivers Computer Corporation PERQ computer under the Accent operating system.
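One way to picture a path rule, as a hedged sketch rather than the PERQ implementation's actual syntax: treat it as a regular expression over named events and trigger the debugger whenever the observed event sequence falls outside the permitted language.

```python
# A minimal sketch: a path rule as a regular expression over named
# program events. The rule syntax and event names are invented.
import re

# Permitted behavior: acquire, then one or more writes, then release.
path_rule = re.compile(r"(acquire;(write;)+release;)*")

def violates(events):
    trace = "".join(e + ";" for e in events)
    return path_rule.fullmatch(trace) is None

ok_trace  = ["acquire", "write", "write", "release"]
bad_trace = ["acquire", "release"]            # release without a write

print(violates(ok_trace))   # False: behavior matches the path rule
print(violates(bad_trace))  # True:  the debugger would stop here
```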
14.
Vaclav Rajlich, Journal of Systems and Software, 1985, 5(1): 81–88
In this paper, rigorous application of stepwise refinement is explored. The steps of definition, decomposition, and completion are described, where completion is a newly introduced step. This combination of steps extends the use of stepwise refinement to larger systems. The notions of range, active objects, and backlog interface are introduced. Verification of incomplete programs via interactive testing is described. The paradigm is demonstrated in an example. The relationship between the paradigm and current programming languages is considered. It is argued that the WHILE-DO loop is a harmful construct from this point of view.
15.
With the increased use of software in safety critical systems, software safety has become an important factor in system quality. This paper describes a technique, software fault tree analysis, for the safety analysis of software. The technique interfaces with hardware fault tree analysis to allow the safety of the entire system to be maximized. Experience with the technique and its practicality are discussed.
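For readers unfamiliar with fault trees, the sketch below evaluates a minimal AND/OR tree bottom-up from basic-event probabilities, assuming independence. The tree, names, and numbers are invented; software fault tree analysis proper works backward from a hazard through the program's constructs rather than from a probability table.

```python
# A minimal sketch: bottom-up evaluation of a fault tree with AND/OR
# gates over basic-event probabilities (independence assumed). All
# events and probabilities are invented for illustration.

def evaluate(node):
    kind = node[0]
    if kind == "basic":
        return node[2]                           # ("basic", name, p)
    probs = [evaluate(child) for child in node[2]]
    if kind == "and":                            # all children must fail
        p = 1.0
        for q in probs:
            p *= q
        return p
    p_none = 1.0                                 # "or": any child fails
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

hazard = ("or", "valve stays open", [
    ("basic", "command omitted by software", 1e-4),
    ("and", "spurious close masked", [
        ("basic", "sensor reads stale value", 1e-3),
        ("basic", "watchdog disabled", 1e-2),
    ]),
])

print(f"P(hazard) ~= {evaluate(hazard):.2e}")
```

Interfacing with the hardware analysis then amounts to grafting software-caused events such as these into the system-level tree under the same hazard.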
16.
17.
This article describes an algorithm for incremental parsing of expressions in the context of syntax-directed editors for programming languages. Since a syntax-directed editor represents programs as trees, and statements and expressions as nodes in trees, making minor modifications in an expression can be difficult. Consider, for example, changing a “+” operator to a “*” operator, or adding a short subexpression at a syntactically but not structurally correct position, such as inserting “) * (d” at the # mark in “(a + b # + c)”. To make these changes in a typical syntax-directed editor, the user must understand the tree structure and type a number of tree-oriented construction and manipulation commands. This article describes an algorithm that allows the user to think in terms of the syntax of the expression as it is displayed on the screen (in infix notation) rather than in terms of its internal representation (which is effectively prefix), while maintaining the benefits of syntax-directed editing. This algorithm is significantly different from other incremental parsing algorithms in that it does not involve modifications to a traditional parsing algorithm or the overhead of maintaining a parser stack or any data structure other than the syntax tree. Instead, the algorithm applies tree transformations, in real time as each token is inserted or deleted, to maintain a correct syntax tree.
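To make the example concrete, here are the syntax trees before and after the insertion, written as nested tuples (operator, left, right). This is only a sketch of the structural distance involved: the article's algorithm reaches the second tree by local transformations as each token arrives, whereas the trees here are simply written out by hand.

```python
# A minimal sketch: the trees a syntax-directed editor must move
# between for the insertion in the example above.

before = ("+", ("+", "a", "b"), "c")                # (a + b + c)
after  = ("*", ("+", "a", "b"), ("+", "d", "c"))    # (a + b) * (d + c)

def show(node, depth=0):
    if isinstance(node, str):
        print("  " * depth + node)
    else:
        op, left, right = node
        print("  " * depth + op)
        show(left, depth + 1)
        show(right, depth + 1)

show(before)
print("---")
show(after)
```

Even in this tiny case the root operator, the right subtree, and the grouping all change, which is why tree-oriented commands are so awkward for edits that are trivial in the displayed infix text.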
18.
19.
As the cost of programming becomes a major component of the cost of computer systems, it becomes imperative that program development and maintenance be better managed. One measurement a manager could use is programming complexity. Such a measure can be very useful if the manager is confident that the higher the complexity measure is for a programming project, the more effort it takes to complete the project and perhaps to maintain it. Until recently most measures of complexity were based only on intuition and experience. In the past 3 years two objective metrics have been introduced, McCabe's cyclomatic number v(G) and Halstead's effort measure E. This paper reports an empirical study designed to compare these two metrics with a classic size measure, lines of code. A fourth metric based on a model of programming is introduced and shown to be better than the previously known metrics for some experimental data.
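Both metrics reduce to simple formulas once the underlying counts are extracted: v(G) = e − n + 2p from the control-flow graph, and Halstead's E = D · V from operator/operand counts. A minimal sketch, with invented counts (the hard part in practice is the extraction, which is skipped here):

```python
# A minimal sketch: computing the two metrics named above from
# already-extracted counts. The counts are invented for illustration.
import math

def cyclomatic(edges, nodes, components=1):
    # McCabe: v(G) = e - n + 2p for a control-flow graph
    return edges - nodes + 2 * components

def halstead_effort(eta1, eta2, N1, N2):
    # Halstead: volume V = N*log2(eta), with eta = eta1 + eta2 distinct
    # operators/operands and N = N1 + N2 total occurrences;
    # difficulty D = (eta1/2)*(N2/eta2); effort E = D * V
    eta, N = eta1 + eta2, N1 + N2
    V = N * math.log2(eta)
    D = (eta1 / 2) * (N2 / eta2)
    return D * V

print("v(G) =", cyclomatic(edges=11, nodes=9))          # = 4
print("E    =", round(halstead_effort(10, 7, 28, 22)))  # ~= 3212
```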