Similar Articles
20 similar articles found.
1.
Most extant debugging aids force their users to think about errors in programs from a low-level, unit-at-a-time perspective. Such a perspective is inadequate for debugging large complex systems, particularly distributed systems. In this paper, we present a high-level approach to debugging that offers an alternative to the traditional techniques. We describe a language, EDL, developed to support this high-level approach to debugging and outline a set of tools that has been constructed to effect this approach. The paper includes an example illustrating the approach and discusses a number of problems encountered while developing these debugging tools.

2.
Many software tools are interactive in nature and require a close match between the user's knowledge of how a task is to be performed and the capabilities the tool provides. This paper describes the current status of an instrumentation and analysis package to measure user performance in an interactive system. A prototype measurement system is considered to evaluate a screen editor and to develop models of user behavior.

3.
In this paper we present the results of the Working Group on Design Synthesis and Measurement. This group explored the issues that separate and bind software engineering and VLSI design. The issues on which we comment are design views and tradeoffs, levels of abstraction and their importance, and design methodologies and their effect on decisions. We also examine the support environments needed to facilitate design in VLSI and software engineering, the current state of the art of silicon compilation, and the types of problems that are best suited to silicon compilation.

4.
5.
This paper introduces a modified version of path expressions called Path Rules which can be used as a debugging mechanism to monitor the dynamic behavior of a computation. Path rules have been implemented in a remote symbolic debugger running on the Three Rivers Computer Corporation PERQ computer under the Accent operating system.
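To illustrate the idea of monitoring a computation against a path rule, the sketch below encodes an allowed event sequence, open (read | write)* close, as a small state machine and flags the first trace event that departs from it. The rule and event names are invented for illustration; they are not taken from the Accent/PERQ implementation.

```python
# Hypothetical path rule: open (read | write)* close, encoded as a
# transition table from (state, event) to the next state.
LEGAL = {
    ("closed", "open"): "opened",
    ("opened", "read"): "opened",
    ("opened", "write"): "opened",
    ("opened", "close"): "closed",
}

def check_trace(events):
    """Return the index of the first event violating the path rule, or None."""
    state = "closed"
    for i, event in enumerate(events):
        nxt = LEGAL.get((state, event))
        if nxt is None:
            return i          # dynamic behavior departed from the rule here
        state = nxt
    return None

check_trace(["open", "read", "write", "close"])   # conforming trace
check_trace(["open", "close", "write"])           # violation at index 2
```

A debugger built on this idea would emit such events at run time and break as soon as `check_trace`-style monitoring reports a violation.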

6.
A meta-analysis of 32 comparative studies showed that computer-based education has generally had positive effects on the achievement of elementary school pupils. These effects have been different, however, for programs of off-line computer-managed instruction (CMI) and for interactive computer-assisted instruction (CAI). The average effect in 28 studies of CAI programs was an increase in pupil achievement scores of 0.47 standard deviations, or from the 50th to the 68th percentile. The average effect in four studies of CMI programs, however, was an increase in scores of only 0.07 standard deviations. Study features were not significantly related to study outcomes.
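The percentile figures above follow from the effect sizes by assuming normally distributed achievement scores; a quick check of that arithmetic:

```python
# Convert a meta-analytic effect size (in standard deviations) to the
# percentile reached by a pupil who started at the 50th percentile,
# assuming scores are normally distributed.
from math import erf, sqrt

def percentile_from_effect(d):
    """Standard normal CDF at d, expressed as a percentile."""
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

percentile_from_effect(0.47)   # CAI effect: about the 68th percentile
percentile_from_effect(0.07)   # CMI effect: only about the 53rd percentile
```

This reproduces the abstract's 50th-to-68th-percentile claim for the 0.47-SD CAI effect.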

7.
8.
A technique for software system behavior specification appropriate for use in designing systems with concurrency is presented. The technique is based upon a generalized ability to define events, or significant occurrences in a software system, and then indicate whatever constraints the designer might wish to see imposed upon the ordering or simultaneity of those events. Constructs implementing this technique in the DREAM software design system are presented and illustrated. The relationship of this technique to other behavior specification techniques is also discussed.

9.
A compiler-based specification and testing system for defining data types has been developed. The system, DAISTS (data abstraction implementation, specification, and testing system) includes formal algebraic specifications and statement and expression test coverage monitors. This paper describes our initial attempt to evaluate the effectiveness of the system in helping users produce software. In an exploratory study, subjects without prior experience with DAISTS were encouraged by the system to develop effective sets of test cases for their implementations. Furthermore, an analysis of the errors remaining in the implementations provided valuable hints about additional useful testing metrics.
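The core idea of running algebraic specifications against an implementation can be sketched as follows. The stack type, its axioms, and the test points are invented illustrations of the style, not DAISTS code:

```python
# A list-backed stack implementation under test.
def push(s, x):
    return s + [x]

def pop(s):
    return s[:-1]

def top(s):
    return s[-1]

# Algebraic axioms, each run directly as an executable check.
def axiom_pop_push(s, x):
    return pop(push(s, x)) == s       # axiom: pop(push(s, x)) = s

def axiom_top_push(s, x):
    return top(push(s, x)) == x       # axiom: top(push(s, x)) = x

# Test cases supply the variable bindings for the axioms.
test_points = [([], 1), ([2, 3], 9)]
results = all(axiom_pop_push(s, x) and axiom_top_push(s, x)
              for s, x in test_points)
```

In a DAISTS-like system, coverage monitors would additionally report which statements and expressions of the implementation these axiom instances exercised.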

10.
This paper presents an attempt to examine a set of basic relationships among various software development variables, such as size, effort, project duration, staff size, and productivity. These variables are plotted against each other for 15 Software Engineering Laboratory projects that were developed for NASA/Goddard Space Flight Center by Computer Sciences Corporation. Certain relationships are derived in the form of equations, and these equations are compared with a set derived by Walston and Felix for IBM Federal Systems Division project data. Although the equations do not have the same coefficients, they are seen to have similar exponents. In fact, the Software Engineering Laboratory equations tend to be within one standard error of estimate of the IBM equations.
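Relationships of this kind are typically power laws, effort = a · size^b, fitted by least squares in log-log space, which is why the comparison above focuses on coefficients versus exponents. A minimal sketch of the derivation; the project data below is invented for illustration and is not the SEL or IBM data:

```python
# Fit y = a * x**b by ordinary least squares on log-transformed data.
from math import exp, log

def fit_power_law(xs, ys):
    lx = [log(x) for x in xs]
    ly = [log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))      # slope = exponent b
    a = exp(my - b * mx)                        # intercept gives coefficient a
    return a, b

size = [10.0, 20.0, 40.0, 80.0]      # hypothetical project sizes (KLOC)
effort = [24.0, 55.0, 130.0, 300.0]  # hypothetical effort (person-months)
a, b = fit_power_law(size, effort)
```

Two data sets can then "have similar exponents" (similar b) while differing in coefficient a, exactly the situation the abstract describes.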

11.
12.
This paper analyzes the resource utilization curve developed by Parr. The curve is compared with several other curves, including the Rayleigh curve, a parabola, and a trapezoid, with respect to how well they fit manpower utilization. The evaluation is performed for several projects of the 6–12 man-year variety developed in the Software Engineering Laboratory. The conclusion drawn is that the Parr curve can be made to fit the data better than the other curves. However, because of the noise in the data, it is difficult to confirm the shape of the manpower distribution from the data alone and therefore difficult to validate any particular model. Also, since the parameters used in the curve are not easily calculable or estimable from known data, the curve is not effective for resource estimation.
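For reference, the Rayleigh staffing curve used as one of the comparison shapes has the Norden/Putnam form m(t) = 2Kat·exp(−at²), where K is total effort and a fixes when staffing peaks. A small sketch with invented parameter values, not taken from the SEL projects:

```python
# Rayleigh manpower curve: m(t) = 2*K*a*t*exp(-a*t**2).
# Choosing a = 1/(2*t_peak**2) places peak staffing at t = t_peak.
from math import exp

def rayleigh_staffing(t, K, t_peak):
    a = 1.0 / (2.0 * t_peak ** 2)
    return 2.0 * K * a * t * exp(-a * t ** 2)

# Hypothetical 120 person-month project peaking at month 12:
profile = [rayleigh_staffing(t, K=120.0, t_peak=12.0) for t in range(37)]
```

Fitting K and a to noisy monthly staffing data is exactly where the abstract's caveat bites: several quite different shapes can fit such data about equally well.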

13.
With the increased use of software in safety-critical systems, software safety has become an important factor in system quality. This paper describes a technique, software fault tree analysis, for the safety analysis of software. The technique interfaces with hardware fault tree analysis to allow the safety of the entire system to be maximized. Experience with the technique and its practicality are discussed.

14.
We state a set of criteria that has guided the development of a metric system for measuring the quality of a large-scale software product. This metric system uses the flow of information within the system as an index of system interconnectivity. Based on this observed interconnectivity, a variety of software metrics can be defined. The types of software quality features that can be measured by this approach are summarized. The data-flow analysis techniques used to establish the paths of information flow are explained and illustrated. Finally, a means of integrating various metrics and models into a comprehensive software development environment is discussed. This possible integration is explained in terms of the Gandalf system currently under development at Carnegie-Mellon University.
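One widely cited formulation of an information-flow metric of this kind is Henry and Kafura's length × (fan-in × fan-out)²; the sketch below uses that formula for illustration, with invented module data, and may differ in detail from the metric system the paper defines:

```python
# Information-flow complexity of a procedure from its flow counts,
# following the Henry-Kafura formulation: length * (fan_in * fan_out)**2.

def info_flow_complexity(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

# (length in lines, fan-in, fan-out) per module -- hypothetical values.
modules = {
    "parse":  (120, 3, 2),
    "report": (45, 1, 4),
}
scores = {name: info_flow_complexity(*m) for name, m in modules.items()}
```

A high score flags a procedure that is both long and heavily interconnected, which is the kind of quality feature the abstract says this approach can measure.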

15.
16.
The various kinds of access decision dependencies within a predicate-based model of database protection are classified according to cost of enforcement. Petri nets and some useful extensions are described. Extended Petri nets are used to model the flow of messages and data during protection enforcement within MULTISAFE, a multimodule system architecture for secure database management. The model demonstrates that some of the stated criteria for security are met within MULTISAFE. Of particular interest is the modeling of data-dependent access conditions with predicates at Petri net transitions. Tokens in the net carry the intermodule messages of MULTISAFE. Login, authorization, and database requests are traced through the model as examples. The evaluation of complex access condition predicates is described for the enforcement process. Queues of data and queues of access condition predicates are cycled through the net so that each data record is checked against each predicate. Petri nets are shown to be a useful modeling tool for database security.
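A toy sketch of the central idea, a transition that fires only when its input places hold tokens and its access predicate holds for the token's data. The net topology, place names, and predicate are invented for illustration, not MULTISAFE specifics:

```python
# A transition is (input_places, output_places, predicate). Firing moves a
# token from the inputs to the outputs only if the predicate accepts it.

def fire(marking, transition):
    """Fire one transition if enabled; return the new marking, else None."""
    inputs, outputs, predicate = transition
    if all(marking.get(p) for p in inputs):        # tokens present?
        token = marking[inputs[0]][0]
        if predicate(token):                       # data-dependent condition
            new = {p: list(ts) for p, ts in marking.items()}
            for p in inputs:
                new[p].pop(0)
            for p in outputs:
                new.setdefault(p, []).append(token)
            return new
    return None                                    # not enabled: no firing

# Hypothetical access condition: only requests with level <= 1 are granted.
check = (("request",), ("granted",), lambda tok: tok["level"] <= 1)
m0 = {"request": [{"user": "alice", "level": 1}], "granted": []}
m1 = fire(m0, check)
```

Here a request whose data fails the predicate simply leaves the transition disabled, which is how data-dependent access conditions block disallowed flows in the model.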

17.
As the cost of programming becomes a major component of the cost of computer systems, it becomes imperative that program development and maintenance be better managed. One measurement a manager could use is programming complexity. Such a measure can be very useful if the manager is confident that the higher the complexity measure is for a programming project, the more effort it takes to complete the project and perhaps to maintain it. Until recently most measures of complexity were based only on intuition and experience. In the past 3 years two objective metrics have been introduced, McCabe's cyclomatic number v(G) and Halstead's effort measure E. This paper reports an empirical study designed to compare these two metrics with a classic size measure, lines of code. A fourth metric based on a model of programming is introduced and shown to be better than the previously known metrics for some experimental data.
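Of the metrics named above, McCabe's cyclomatic number is the simplest to compute: v(G) = E − N + 2P for a control-flow graph with E edges, N nodes, and P connected components. A quick sketch on an invented one-branch graph:

```python
# McCabe's cyclomatic number for a control-flow graph.
def cyclomatic_number(n_edges, n_nodes, n_components=1):
    return n_edges - n_nodes + 2 * n_components

# Control-flow graph of a single if/else: entry branches, the arms rejoin.
cfg_edges = [("entry", "then"), ("entry", "else"),
             ("then", "exit"), ("else", "exit")]
cfg_nodes = {n for edge in cfg_edges for n in edge}
v = cyclomatic_number(len(cfg_edges), len(cfg_nodes))   # 4 - 4 + 2 = 2
```

The value 2 matches the intuition that one binary decision yields two independent paths; straight-line code scores 1.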

18.
This paper presents the results of a study of the software complexity characteristics of a large real-time signal processing system for which there is a 6-year maintenance history. The objective of the study was to compare values generated by software metrics to the maintenance history in order to determine which software complexity metrics would be most useful for estimating maintenance effort. The metrics analyzed were program size measures, software science measures, and control-flow measures. During the course of the study, two new software metrics were defined. The new metrics, maximum knot depth and knots per jump ratio, are both extensions of the knot count metric. When the metrics were compared to the maintenance data, the control-flow measures showed the strongest positive correlation.
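The knot count that both new metrics extend counts pairs of control-flow jumps whose line-number ranges interleave (cross) rather than nest. A minimal sketch with invented jump data:

```python
# Count "knots": pairs of jumps whose source/target line ranges interleave.

def knots(jumps):
    """jumps: list of (from_line, to_line) pairs; returns the knot count."""
    spans = [tuple(sorted(j)) for j in jumps]   # direction does not matter
    count = 0
    for i in range(len(spans)):
        for k in range(i + 1, len(spans)):
            (a, b), (c, d) = spans[i], spans[k]
            if a < c < b < d or c < a < d < b:  # interleaved, not nested
                count += 1
    return count

knots([(1, 10), (5, 20)])   # crossing jumps: one knot
knots([(1, 10), (2, 5)])    # nested jumps: no knot
```

Maximum knot depth and the knots-per-jump ratio can then be derived from the same pairwise crossing relation.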

19.
This article describes an algorithm for incremental parsing of expressions in the context of syntax-directed editors for programming languages. Since a syntax-directed editor represents programs as trees, with statements and expressions as nodes, making minor modifications to an expression can be difficult. Consider, for example, changing a "+" operator to a "*" operator, or adding a short subexpression at a syntactically but not structurally correct position, such as inserting ") * (d" at the # mark in "(a + b # + c)". To make these changes in a typical syntax-directed editor, the user must understand the tree structure and type a number of tree-oriented construction and manipulation commands. This article describes an algorithm that allows the user to think in terms of the syntax of the expression as it is displayed on the screen (in infix notation) rather than in terms of its internal representation (which is effectively prefix), while maintaining the benefits of syntax-directed editing. This algorithm is significantly different from other incremental parsing algorithms in that it does not involve modifications to a traditional parsing algorithm or the overhead of maintaining a parser stack or any data structure other than the syntax tree. Instead, the algorithm applies tree transformations, in real time as each token is inserted or deleted, to maintain a correct syntax tree.

20.
A stack-heap has been shown to be an efficient storage management scheme for programs containing both recursive and retentive control structures. The stack-heap uses a compile-time marking algorithm that determines those program modules that may need retention at run-time. Thus, instances of marked modules are allocated space in the heap during execution. All others are stored in a stack. In this paper, we present an optimistic implementation of the stack-heap in which each module instance is kept in the stack until it suspends. Upon suspension, the instance is copied into the heap where it remains for the lifetime of the instance. Some of the restrictions imposed on the programming language by the original stack-heap scheme are eliminated under this optimistic implementation. It is shown that when the original stack-heap cannot be used and both recursive and coroutine programs are likely, the optimistic implementation of the stack-heap is more efficient on the average than the heap.
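The optimistic policy described above can be sketched in a few lines: every activation starts on the stack, returns are cheap stack pops, and only an instance that actually suspends migrates to the heap. The class and method names are invented for illustration:

```python
# Optimistic stack-heap sketch: assume stack lifetime, pay the heap cost
# only when an instance suspends (e.g. a coroutine yielding).

class Activation:
    def __init__(self, name):
        self.name = name
        self.retained = False

class StackHeap:
    def __init__(self):
        self.stack, self.heap = [], []

    def call(self, name):
        frame = Activation(name)
        self.stack.append(frame)      # optimistic: allocate on the stack
        return frame

    def ret(self):
        self.stack.pop()              # normal return: cheap stack pop

    def suspend(self):
        frame = self.stack.pop()      # suspension: copy the instance to the
        frame.retained = True         # heap, where it stays for its lifetime
        self.heap.append(frame)
        return frame

rt = StackHeap()
rt.call("main")
co = rt.call("coroutine")
rt.suspend()                          # only the suspending frame is retained
```

Purely recursive runs never touch the heap under this policy, which is where the average-case advantage over an all-heap scheme comes from.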


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号