1.
Keith I. Watson, Software Quality Journal, 1992, 1(4): 193-208
This paper describes a case study in the use of the COCOMO cost estimation model as a tool to provide an independent prognosis and validation of the schedule of a software project at IBM UK Laboratories Ltd, Hursley. Case studies clearly carry the danger of being anecdotal; however, software engineers often work in situations where insufficient historical data is available to calibrate models to the local environment, and it is often necessary to use such tools on individual projects to justify their further use. This case study describes how we began to use COCOMO and concentrates on some of the problems and benefits encountered when trying to use COCOMO in a live development environment. The paper begins by discussing some problems in mapping the COCOMO phases onto the IBM development process. The practical aspects of gathering the development parameters of the model are described, and the results are compared with a schedule assessment using other prognosis techniques and with the planned schedule at other milestones in the project's history. Some difficulties experienced in interpreting the model's output are discussed, followed by a brief comparison with other schedule analysis techniques used in quality assurance. We hope this case study shows that, despite the problems in trying to use models such as COCOMO, there are significant benefits in helping the user understand what is required to use such tools more effectively to improve software development cost estimates in the future.
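For readers unfamiliar with the model the abstract refers to, the following is a minimal sketch of the intermediate COCOMO-81 effort and schedule equations. The mode coefficients are Boehm's published values; the size and effort-adjustment factor (EAF) in the example are illustrative placeholders, not figures from the IBM case study.

```python
# Intermediate COCOMO-81: effort = a * KLOC**b * EAF (person-months),
# schedule = 2.5 * effort**c (elapsed months). Coefficients per mode
# are Boehm's published values.
MODES = {
    "organic":      (3.2, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (2.8, 1.20, 0.32),
}

def cocomo(kloc: float, eaf: float = 1.0, mode: str = "semidetached"):
    a, b, c = MODES[mode]
    effort = a * kloc ** b * eaf      # person-months
    schedule = 2.5 * effort ** c      # elapsed months
    return effort, schedule

if __name__ == "__main__":
    # Illustrative inputs only: 32 KLOC, cost drivers multiplying to 1.15.
    effort, months = cocomo(kloc=32, eaf=1.15)
    print(f"effort: {effort:.1f} person-months, schedule: {months:.1f} months")
```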
2.
N. Wirth, Software, 1971, 1(4): 309-333
The development of a compiler for the programming language PASCAL is described in some detail. Design decisions concerning the layout of program and data, the organization of the compiler including its syntax analyser, and the overall approach to the project are discussed. The compiler is written in its own language and was implemented for the CDC 6000 computer family. The reader is expected to be familiar with Reference 1.
3.
The performance achieved by a parallel architecture on a complete application is determined by the combination of its hardware and software components. Hardware here means node processing power and network parameters, while software covers everything from the optimization capabilities of the compiler to the high-level programming model. They interact in a non-trivial way, delivering variable results for different problem sizes and making performance prediction very difficult. Performance becomes predictable once, given an algorithm, you can parameterize it in terms of the floating-point operations needed, the bandwidth and latency requirements, the granularity of the problem itself, and a few machine-dependent parameters. We attack the issue of predicting performance for a large class of regular synchronous problems on rectangular grids (only 2D in this paper). The aim of the paper is to determine, by means of dedicated small benchmarking kernels, all the machine-dependent parameters. These are used to predict and compare, over a very wide range of data set sizes, the performance of the Connection Machine CM-5, the Cray T3D and the IBM SP2 on a simple but complete application: the Conjugate Gradient solution of the Poisson equation. We show that the parameterization can be done quite accurately for all of the studied platforms, thus predicting, from measurements performed on extremely simple kernels and some algorithmic understanding, the behavior of an MPP over a very wide range of parameters. We argue in favor of adopting this methodology to produce meaningful benchmarks of MPP platforms. © 1998 John Wiley & Sons, Ltd.
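A minimal sketch of this style of performance model: combine an algorithm's operation counts with machine parameters measured by small kernels to predict time per iteration. The machine numbers and operation counts below are placeholders, not measurements from the CM-5, T3D or SP2.

```python
# Predict seconds per CG-like iteration on an n x n grid over p nodes,
# given kernel-measured machine parameters. All constants are illustrative.
def predict_time(n, p, flop_rate, bandwidth, latency):
    """flop_rate: sustained flop/s per node (from a compute kernel)
    bandwidth: bytes/s per link (from a message-passing kernel)
    latency:   seconds per message (from a ping-pong kernel)"""
    local_points = n * n / p                   # grid points per node
    flops = 32 * local_points                  # illustrative op count per iteration
    halo_bytes = 8 * 4 * (n / p ** 0.5)        # 4 boundary exchanges, 8-byte words
    messages = 4                               # one message per neighbour
    return flops / flop_rate + halo_bytes / bandwidth + messages * latency

# Example: 1024x1024 grid on 64 nodes with placeholder machine parameters.
print(predict_time(n=1024, p=64, flop_rate=50e6, bandwidth=20e6, latency=30e-6))
```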
4.
5.
6.
A compiler optimization may be correct and yet be insecure. This work focuses on the common optimization that removes dead (i.e., useless) store instructions from a program. This operation may introduce new information leaks, weakening security while preserving functional equivalence. This work presents a polynomial-time algorithm for securely removing dead stores. The algorithm is necessarily approximate, as it is shown that determining whether new leaks have been introduced by dead store removal is undecidable in general. The algorithm uses taint and control-flow information to determine whether a dead store may be removed without introducing a new information leak. A notion of secure refinement is used to establish the security preservation properties of other compiler transformations. The important static single assignment optimization is, however, shown to be inherently insecure.
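A toy sketch of the idea, not the paper's actual algorithm: a dead store may be erased only if doing so cannot leave secret data live in memory. Here "tainted" marks values derived from secrets; this conservative pass keeps dead stores of tainted values and scrubs them instead of dropping them.

```python
# Conservative dead-store removal guided by taint information (illustrative).
def remove_dead_stores(instrs, live_out, tainted):
    """instrs: list of (dest, expr) stores; live_out: vars read later;
    tainted: vars carrying secret-derived data. Returns transformed stores."""
    out = []
    for dest, expr in instrs:
        if dest in live_out:
            out.append((dest, expr))       # not dead: keep as-is
        elif dest in tainted:
            out.append((dest, "0"))        # dead but secret: scrub, don't drop
        # dead and untainted: safely removed (no leak can be introduced)
    return out

prog = [("key", "read_secret()"), ("tmp", "key ^ mask"), ("x", "1")]
print(remove_dead_stores(prog, live_out={"x"}, tainted={"key", "tmp"}))
```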
7.
8.
The Garp architecture and C compiler
Various projects and products have been built using off-the-shelf field-programmable gate arrays (FPGAs) as computation accelerators for specific tasks. Such systems typically connect one or more FPGAs to the host computer via an I/O bus. Some have shown remarkable speedups, albeit limited to specific application domains. Many factors limit the general usefulness of such systems. Long reconfiguration times prevent the acceleration of applications that spread their time over many different tasks. Low-bandwidth paths for data transfer limit the usefulness of such systems to tasks that have a high computation-to-memory-bandwidth ratio. In addition, standard FPGA tools require hardware design expertise beyond the knowledge of most programmers. To help investigate the viability of connected FPGA systems, the authors designed their own architecture, called Garp, and experimented with running applications on it. They are also investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications. They present their results in this article.
9.
M. R. Kibby, Computer Applications in the Biosciences, 1985, 1(2): 73-78
Electronic spreadsheets computerise the traditional layout of any tabulation or complex calculation done with pencil, paper and calculator. They therefore have great potential for aiding routine calculations that might otherwise be done by these means or with a small BASIC program. Their simple structure and strong affinity with traditional methods make them particularly suitable for those who have not yet mastered the art of programming. However, a necessarily brief review of their application to science and technology demonstrates that this potential is not being realised, in contrast with their widespread usage in the business world. The application of Multiplan and Visicalc, running respectively on the Macintosh and the Apple IIe microcomputers, is demonstrated in four types of calculation: tabulation, curve-fitting and statistics, simulation, and numerical approximation. Advantages are found in the concurrent display of data and results, the ease of correction or modification of data, and the escape from traditional linear programming methods. The spreadsheet format imposes its own constraints: it is not as flexible as BASIC, it demands more memory and may execute more slowly than a program written in a high-level language, and it is more difficult to produce graphical output.
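As an illustration of the "numerical approximation" category, here is a spreadsheet-style tabulation written in code: one row per x value, with a running trapezoid-rule column approximating an integral, much as a cell formula would accumulate it. The function and step count are arbitrary examples.

```python
# Spreadsheet-style table: x, f(x), and a running trapezoid-rule integral.
import math

f = math.sin
a, b, n = 0.0, math.pi, 10     # integrate sin over [0, pi] in 10 panels
h = (b - a) / n

running = 0.0
print(f"{'x':>8} {'f(x)':>10} {'integral':>10}")
for i in range(n + 1):
    x = a + i * h
    if i > 0:
        # each "row" adds one trapezoid panel, like a cell referencing the row above
        running += h * (f(x) + f(x - h)) / 2
    print(f"{x:8.4f} {f(x):10.6f} {running:10.6f}")
```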
10.
This paper describes a verified compiler for PreScheme, the implementation language for the vlisp run-time system. The compiler and proof were divided into three parts: a transformational front end that translates source text into a core language, a syntax-directed compiler that translates the core language into a combinator-based tree-manipulation language, and a linearizer that translates combinator code into code for an abstract stored-program machine with linear memory for both data and code. This factorization enabled different proof techniques to be used for the different phases of the compiler, and also allowed the generation of good code. Finally, the whole process was made possible by carefully defining the semantics of vlisp PreScheme rather than just adopting Scheme's. We believe that the architecture of the compiler and its correctness proof can easily be applied to compilers for languages other than PreScheme. This work was supported by Rome Laboratory of the United States Air Force, contract No. F19628-89-C-0001, through the MITRE Corporation, and by NSF and DARPA under NSF grants CCR-9002253 and CCR-9014603. Author's current address: Department of Computer Science and Engineering, Oregon Graduate Institute, P.O. Box 91000, Portland, OR 97291-1000.
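A minimal sketch of the three-phase factorization the abstract describes: front end to core language, core to combinator code, then linearization to stored-program code. The toy languages here are invented for illustration; they are not vlisp PreScheme.

```python
# Three-phase pipeline: source -> core -> combinators -> linear code.
def front_end(src):
    """Source text -> core language. Toy syntax: '+ 1 2'."""
    op, *args = src.split()
    return (op, [int(a) for a in args])

def to_combinators(core):
    """Core language -> combinator-based instruction tree."""
    op, args = core
    return [f"PUSH {a}" for a in args] + [f"PRIM {op}"]

def linearize(tree):
    """Combinator code -> code for a linear-memory stored-program machine."""
    return "\n".join(f"{addr}: {ins}" for addr, ins in enumerate(tree))

# Each phase can be tested (or proved) against its own small semantics.
print(linearize(to_combinators(front_end("+ 1 2"))))
```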
11.
Commonly, validation of analytical network performance models is done by simulation, using either an existing network simulation tool or a simulator developed from scratch. An analytical model of star-shaped wireless sensor networks has been proposed in the literature in which, upon receiving a query from the coordinator, each sensor node sends one data frame to it by executing the IEEE 802.15.4 unslotted carrier-sense multiple access with collision avoidance algorithm. The model consists of expressions for calculating quantities such as the probability of successful receipt of the data at a certain time. The authors of the model wrote a special simulation program in order to validate the expressions. Our aim was to employ the probabilistic model checker PRISM instead. PRISM only requires the user to formally specify the network as a kind of state machine and to express the probabilities sought as queries in the form of logical formulas. It finds the probabilities automatically and can present them on graphs. We show how to specify the networks formally in such a way that all the expressions from the analytical model can be validated with PRISM. For networks containing a few nodes, the validation can be carried out by normal model checking, which, in contrast to simulation, always checks all possible network behaviors, whereas statistical model checking can be used for larger networks.
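A heavily simplified Monte Carlo sketch of the kind of check that PRISM replaces: several sensor nodes each draw a random backoff and transmit one frame of fixed duration, and a frame is received only if no other transmission overlaps it. This illustrates validating such a probability by simulation; it is not the IEEE 802.15.4 algorithm itself (no clear-channel assessment, retries or acknowledgements are modelled).

```python
# Estimate the probability that node 0's frame is received collision-free.
import random

def success_probability(nodes=5, backoff_slots=8, frame_slots=3, trials=100_000):
    ok = 0
    for _ in range(trials):
        starts = [random.randrange(backoff_slots) for _ in range(nodes)]
        s0 = starts[0]
        # node 0 succeeds if no other frame overlaps its transmission window
        if all(abs(s - s0) >= frame_slots for s in starts[1:]):
            ok += 1
    return ok / trials

print(success_probability())
```

PRISM would compute the corresponding probability exactly from a state-machine model, over all behaviors, rather than estimating it from sampled runs.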
12.
13.
The Multiflow trace scheduling compiler
P. Geoffrey Lowney, Stefan M. Freudenberger, Thomas J. Karzes, W. D. Lichtenstein, Robert P. Nix, John S. O'Donnell, John C. Ruttenberg, The Journal of Supercomputing, 1993, 7(1-2): 51-142
The Multiflow compiler uses the trace scheduling algorithm to find and exploit instruction-level parallelism beyond basic blocks. The compiler generates code for VLIW computers that issue up to 28 operations each cycle and maintain more than 50 operations in flight. At Multiflow the compiler generated code for eight different target machine architectures and compiled over 50 million lines of Fortran and C applications and systems code. The requirement of finding large amounts of parallelism in ordinary programs, the trace scheduling algorithm, and the many unique features of the Multiflow hardware placed novel demands on the compiler. New techniques in instruction scheduling, register allocation, memory-bank management, and intermediate-code optimizations were developed, as were refinements to reduce the overhead of trace scheduling. This article describes the Multiflow compiler and reports on Multiflow's practice and experience with compiling for instruction-level parallelism beyond basic blocks.
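A minimal sketch of trace selection, the first step of trace scheduling: starting from the entry block, repeatedly follow the most probable successor to form a trace that can then be scheduled as if it were one long basic block. The control-flow graph and branch probabilities below are invented for illustration.

```python
# Trace selection over a control-flow graph with branch probabilities.
cfg = {  # block -> list of (successor, branch probability)
    "A": [("B", 0.9), ("C", 0.1)],
    "B": [("D", 1.0)],
    "C": [("D", 1.0)],
    "D": [],
}

def select_trace(entry):
    trace, block, seen = [], entry, set()
    while block is not None and block not in seen:
        trace.append(block)
        seen.add(block)
        succs = cfg[block]
        block = max(succs, key=lambda s: s[1])[0] if succs else None
    return trace

# ['A', 'B', 'D']: the likely path; block C is picked up by a later trace,
# with compensation code inserted at the off-trace branch edges.
print(select_trace("A"))
```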
14.
Dick Grune, Software, 1979, 9(7): 575-593
Requirements are formulated for a tag-list algorithm, i.e. the algorithm used in a compiler for handling the symbol table or identifier list. Starting from a very general tag-list algorithm, 18 practical versions are developed and their merits judged. Although the final choice (binary search in a diluted table) depends on the details of the application, the main part of this article is not devoted to that final choice itself but rather to ways of reaching it.
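A minimal sketch of what "binary search in a diluted table" means: the identifier table is kept sorted but with empty slots interspersed, so most insertions find a nearby gap instead of shifting the whole table, while lookups remain logarithmic. The details here are invented for illustration; the paper weighs 18 such variants.

```python
# Binary search over a sorted table "diluted" with empty slots.
EMPTY = None

def lookup(table, name):
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        m = mid
        while m <= hi and table[m] is EMPTY:   # skip gaps rightwards
            m += 1
        if m > hi or table[m] > name:
            hi = mid - 1                       # target, if present, lies left of mid
        elif table[m] < name:
            lo = m + 1
        else:
            return m
    return -1

# Build a diluted table: one empty slot after every identifier.
table = []
for ident in ["count", "i", "main", "printf"]:
    table += [ident, EMPTY]
print(lookup(table, "main"))     # index of "main"
print(lookup(table, "missing"))  # -1: not in the table
```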
15.
16.
The authors describe a technique for porting a modern language that makes it possible to port the language quickly and still get fast execution. They relate the practical experience gained when porting the compiler to different environments, concentrating on the transportation problems of compilers that generate machine code rather than interpreter code. The authors' approach is based on the definition of a universal operating-system interface that must be implemented on the target machine to install the compiler. They ported the Modula-2/68K compiler, which was developed at their institute and has successfully been installed at external sites. Of the two porting procedures they offered, source-code cross development and object-code transportation, the external sites preferred the latter because it requires less effort.
17.
A bioartificial pancreas is a system which contains isolated islets of Langerhans protected against immune rejection by an artificial membrane, permeable to glucose and insulin, but not to lymphocytes and immunoglobulins. However, it is necessary to design a device which performs as a closed-loop insulin delivery system, more specifically which rapidly responds to a change in the recipient's blood glucose concentration by an appropriate change in insulin release. We have designed a system intended to be connected as an arteriovenous shunt of the recipient; islets are placed between two flat ultrafiltration membranes, and blood circulates successively above the upper, and below the lower, membrane, in reverse direction. A complete kinetic model of glucose transfer from blood to the islet compartment, of insulin generation by the islets displaying a biphasic insulin pattern, and of insulin transfer into the bloodstream was described, and parameters were calculated on the basis of experimental data obtained when islets of Langerhans were perfused in vitro with a synthetic buffer. The resulting calculations indicated that both diffusional and convective transfers were involved in glucose and insulin mass transfer across the membrane, the contribution of diffusion being the most important. The geometry of the system was therefore modified in order to decrease the resistance to flow inside the blood channel. This should increase, at a given hydrostatic pressure, the blood flow rate, and thereby improve the diffusional transfer of insulin. This should also decrease the thrombogenicity of the device.(ABSTRACT TRUNCATED AT 250 WORDS)
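An illustrative back-of-the-envelope version of the mass-transfer split the abstract refers to: solute flux across an ultrafiltration membrane as a diffusive term (driven by the concentration difference) plus a convective term (carried by filtrate flow). All parameter values below are invented placeholders, not the paper's fitted kinetic parameters.

```python
# Split membrane solute transfer into diffusive and convective contributions.
def membrane_flux(c_blood, c_islet, permeability, area, filtrate_flow, sigma):
    """Return (diffusive, convective) solute flux.

    permeability * area scales the diffusive term; filtrate_flow carries the
    convective term, reduced by the membrane reflection coefficient sigma."""
    diffusive = permeability * area * (c_blood - c_islet)
    mean_c = (c_blood + c_islet) / 2          # mean concentration in the membrane
    convective = (1 - sigma) * filtrate_flow * mean_c
    return diffusive, convective

# Placeholder numbers chosen only so the diffusive term dominates,
# as the paper's calculations found.
d, c = membrane_flux(c_blood=10e-3, c_islet=2e-3, permeability=0.05,
                     area=20.0, filtrate_flow=0.05, sigma=0.1)
print(f"diffusive {d:.5f} vs convective {c:.5f} (arbitrary units)")
```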
18.
Arquimedes Canedo, Ben A. Abderazek, Masahiro Sowa, Microprocessors and Microsystems, 2009, 33(2): 129-138
Queue processors are a viable alternative for high-performance embedded computing and parallel processing. We present the design and implementation of a compiler for a queue-based processor. Instructions of a queue processor reference their operands implicitly, making programs free of false dependencies. Compiling for a queue machine differs from traditional compilation methods for register machines: the queue compiler is responsible for scheduling the program in level-order manner to expose natural parallelism and for calculating each instruction's relative offset to access its operands. This paper describes the phases and data structures used in the queue compiler to compile C programs into assembly code for the QueueCore, an embedded queue processor. Experimental results demonstrate that our compiler produces good code in terms of parallelism and code size when compared to code produced by a traditional compiler for a RISC processor.
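A minimal sketch of the level-order scheduling the abstract describes: an expression tree is emitted breadth-first from the deepest level up, so operands are produced into the operand queue before the operations that consume them. The instruction mnemonics are illustrative, not actual QueueCore assembly.

```python
# Level-order code generation for a queue machine.
# (a + b) * (c - d) as a tree: (op, left, right), with string leaves.
expr = ("*", ("+", "a", "b"), ("-", "c", "d"))

def levels(tree):
    """Group tree nodes by depth, root first."""
    out, frontier = [], [tree]
    while frontier:
        out.append(frontier)
        nxt = []
        for node in frontier:
            if isinstance(node, tuple):
                nxt += [node[1], node[2]]
        frontier = nxt
    return out

code = []
for level in reversed(levels(expr)):      # deepest level first
    for node in level:
        if isinstance(node, tuple):
            code.append(node[0])          # consumes two queued operands
        else:
            code.append(f"ld {node}")     # enqueues a leaf operand
print("\n".join(code))                    # ld a, ld b, ld c, ld d, +, -, *
```

All four loads and then both inner operations are independent, which is the "natural parallelism" the level-order schedule exposes.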
19.
PipeRench: a reconfigurable architecture and compiler
With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide the alternative. PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time.
20.
The emergence of Web 2.0 technology provides more opportunities to foster online communication and sharing in an e-learning environment. The purpose of this study was to develop a Web 2.0 annotation system, MyNote, based on the Web 2.0 core concepts of ease of access and active sharing, and then to understand people's perceptions of MyNote from a usability perspective. In this study, MyNote was employed on multimedia learning objects both within a Learning Management System (LMS) and outside it. The evaluation results showed that, under factor analysis, interactivity, usefulness, helpfulness, and willingness for future use were the categories representing perceptions of MyNote. The factors of interactivity and helpfulness were statistically significant predictors of future use of MyNote. Lastly, the habit of taking notes also affected learners' perceptions of using MyNote.