Similar Documents (20 results)
1.
As part of its type-safety regime, Java's semantics require precise exceptions at runtime when programs attempt out-of-bounds array accesses. This paper describes a Java implementation that uses a multiphase approach to identifying safe array accesses. This approach reduces runtime overhead by spreading the out-of-bounds checking effort across multiple phases of compilation and execution: production of mobile code from source code, just-in-time (JIT) compilation in the virtual machine, application method invocations, and the execution of individual array accesses. The code producer uses multiple passes (including common subexpression elimination, load elimination, induction variable substitution, speculation of dynamically verified invariants, and inequality constraint analysis) to identify unnecessary bounds checks and prove their redundancy. During class-loading and JIT compilation, the virtual machine verifies the proofs, inserts code to dynamically validate speculated invariants, and generates code specialized under the assumption that the speculated invariants hold. During each runtime method invocation, the method parameters and other inputs are checked against the speculated invariants, and execution reverts to unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds-checking array accesses to phases that are executed earlier and less frequently, thus reducing runtime overhead. Experimental results show that this approach is able to eliminate more bounds checks than prior approaches, with minimal overhead during JIT compilation. These results also show the contribution of each of the passes to the overall elimination. Furthermore, this approach increased the speed at which the benchmarks executed by up to 16%. Copyright © 2010 John Wiley & Sons, Ltd.
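
The central trade-off, checking a speculated invariant once per method invocation instead of on every array access, can be illustrated with a small Python sketch (function names and structure are hypothetical; Python always bounds-checks, so this is conceptual only):

def sum_range_checked(arr, lo, hi):
    """Baseline: validate every access individually."""
    total = 0
    for i in range(lo, hi):
        if i < 0 or i >= len(arr):
            raise IndexError(f"index {i} out of bounds")
        total += arr[i]
    return total

def sum_range_specialized(arr, lo, hi):
    """Guard the speculated invariant once, then run the unchecked loop."""
    # Speculated invariant: the whole index range is in bounds.
    if 0 <= lo and hi <= len(arr):
        total = 0
        for i in range(lo, hi):
            total += arr[i]          # no per-access check needed on this path
        return total
    # Deoptimization path: invariant does not hold, revert to checked code.
    return sum_range_checked(arr, lo, hi)

print(sum_range_specialized(list(range(10)), 2, 8))   # 27

If the guard fails, control falls back to the fully checked version, mirroring the reversion to unoptimized code described above.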

2.
Context: Model-Driven Development (MDD) is a paradigm that prescribes building conceptual models that abstractly represent the system and generating code from these models through transformation rules. The literature is rife with claims about the benefits of MDD, but they are hardly supported by evidence. Objective: This experimental investigation aims to verify some of the most cited benefits of MDD. Method: We run an experiment on a small set of classes with student subjects to compare the quality, effort, productivity, and satisfaction of traditional development and MDD. The experiment participants built two web applications from scratch: one where the developers implement the code by hand, and another using an industrial MDD tool that automatically generates the code from a conceptual model. Results: Outcomes show that there are no significant differences between the two methods with regard to effort, productivity, and satisfaction, although quality in MDD is more robust to small variations in problem complexity. We discuss possible explanations for these results. Conclusions: For small systems and less programming-experienced subjects, MDD does not always yield better results than a traditional method, even regarding effort and productivity. This contradicts some previous statements about MDD advantages. The benefits of developing a system with MDD appear to depend on certain characteristics of the development context.

3.
The desire to predict the effort in developing software, or to explain its quality, has led to the proposal of several metrics in the literature. As a step toward validating these metrics, the Software Engineering Laboratory has analyzed the Software Science metrics, cyclomatic complexity, and various standard program measures for their relation to 1) effort (including design through acceptance testing), 2) development errors (both discrete and weighted according to the amount of time to locate and fix), and 3) one another. The data investigated were collected from a production Fortran environment and examined across several projects at once, within individual projects, and by individual programmers across projects, with three effort-reporting accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort seem to be strongest. For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither Software Science's E metric, cyclomatic complexity, nor source lines of code appears to relate convincingly better with effort than the others.
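
As a rough illustration of this kind of validation analysis (not the study's Fortran data, which are not reproduced here), one can compute rank correlations between candidate metrics and recorded effort; the module values below are invented:

from scipy.stats import spearmanr

modules = [
    # (Halstead E, cyclomatic complexity, source LOC, effort in hours) -- invented
    (12000,  4, 120,  30),
    (45000,  9, 340,  80),
    ( 8000,  3,  90,  25),
    (60000, 14, 500, 120),
    (30000,  7, 260,  60),
]

effort = [m[3] for m in modules]
for name, col in [("Halstead E", 0), ("Cyclomatic complexity", 1), ("SLOC", 2)]:
    values = [m[col] for m in modules]
    rho, p = spearmanr(values, effort)   # rank correlation with actual effort
    print(f"{name:22s} rho={rho:+.2f}  p={p:.3f}")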

4.
The relationship among software design quality, development effort, and governance practices is a traditional research problem. However, the extent to which consolidated results on this relationship remain valid for open source (OS) projects is an open research problem. An emerging body of literature contrasts the view of open source as an alternative to proprietary software and explains that there exists a continuum between closed and open source projects. This paper hypothesizes that as projects approach the OS end of the continuum, governance becomes less formal. In turn, less formal governance is hypothesized to require higher-quality code, as a means to facilitate coordination among developers by making the structure of the code explicit, and to facilitate quality by removing the pressure of deadlines from contributors. However, less formal governance is also hypothesized to increase development effort due to a more cumbersome coordination overhead. The verification of the research hypotheses is based on empirical data from a sample of 75 major OS projects. Empirical evidence supports our hypotheses and suggests that software quality, mainly measured as coupling and inheritance, does not increase development effort, but represents an important managerial variable to implement the more open governance approach that characterizes OS projects, which, in turn, increases development effort.

5.
Regular code, which includes repetitions of the same basic pattern, has been shown to have an effect on code comprehension: a regular function can be just as easy to comprehend as a non-regular one with the same functionality, despite being significantly longer and including more control constructs. It has been speculated that this effect is due to leveraging the understanding of the first instances to ease the understanding of repeated instances of the pattern. To verify and quantify this effect, we use eye tracking to measure the time and effort spent reading and understanding regular code. The experimental subjects were 18 students and 2 faculty members. The results are that time and effort invested in the initial code segments are indeed much larger than those spent on the later ones, and the decay in effort can be modeled by an exponential model. This shows that syntactic code complexity metrics (such as LOC and MCC) need to be made context-sensitive, e.g. by giving reduced weight to repeated segments according to their place in the sequence. However, it is not the case that repeated code segments are actually read more and more quickly. Rather, initial code segments receive more focus and are looked at more times, while later ones may be only skimmed. Further, a few recurring reading patterns have been identified, which together indicate that in general code reading is far from being purely linear, and exhibits significant variability across experimental subjects.
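
A minimal sketch of fitting such an exponential decay model to per-instance reading effort is shown below; the gaze times are invented and only the modeling step is illustrated:

import numpy as np
from scipy.optimize import curve_fit

def decay(i, a, b, c):
    # effort on the i-th repeated segment: large initially, then levels off
    return a * np.exp(-b * i) + c

instance = np.arange(1, 9)                          # 1st..8th repetition of the pattern
seconds  = np.array([42, 25, 16, 11, 9, 8, 7, 7])   # hypothetical gaze time per instance

(a, b, c), _ = curve_fit(decay, instance, seconds, p0=(40, 0.5, 5))
print(f"fit: effort(i) ~= {a:.1f}*exp(-{b:.2f}*i) + {c:.1f}")

Such a fitted curve is also what would be needed to assign reduced weights to repeated segments in a context-sensitive complexity metric.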

6.
Most of the research effort towards verification of concurrent software has focused on multithreaded code. On the other hand, concurrency in low-end embedded systems is predominantly based on interrupts. Low-end embedded systems are ubiquitous in safety-critical applications such as those supporting transportation and medical automation; their verification is important. Although interrupts are superficially similar to threads, there are subtle semantic differences between the two abstractions. This paper compares and contrasts threads and interrupts from the point of view of verifying the absence of race conditions. We identify a small set of extensions that permit thread verification tools to also verify interrupt-driven software, and we present examples of source-to-source transformations that turn interrupt-driven code into semantically equivalent thread-based code that can be checked by a thread verifier. Finally, we demonstrate a proof-of-concept program transformation tool that converts interrupt-driven sensor network applications into multithreaded code, and we use an existing tool to find race conditions in these applications.
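
A sketch of the kind of interrupt-to-thread mapping described above, under the simplifying assumptions of a single interrupt source and "interrupts disabled" modeled as holding a global lock (this is an illustration of the general idea, not the paper's transformation tool):

import threading

irq_lock = threading.Lock()      # stands in for the interrupt-enable flag
shared_counter = 0

def interrupt_handler():
    """Originally an ISR; here it runs as a thread that must take irq_lock,
    mirroring the fact that an ISR cannot preempt code that disabled interrupts."""
    global shared_counter
    for _ in range(1000):
        with irq_lock:
            shared_counter += 1

def main_task():
    global shared_counter
    for _ in range(1000):
        with irq_lock:           # was: disable_interrupts() ... enable_interrupts()
            shared_counter += 1

t = threading.Thread(target=interrupt_handler)
t.start()
main_task()
t.join()
print("counter =", shared_counter)   # deterministically 2000 with the lock held

A thread-based race checker applied to the transformed program can then flag any shared access that is not protected by irq_lock.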

7.
Modern Code Review (MCR) has been widely used by open source and proprietary software projects. Inspecting code changes costs reviewers much time and effort, since they need to comprehend patches, and many reviewers are often assigned to review many code changes. Moreover, a code change might eventually be abandoned, which wastes that time and effort. Thus, a tool that predicts early on whether a code change will be merged can help developers prioritize which changes to inspect, accomplish more given a tight schedule, and avoid wasting reviewing effort on low-quality changes. In this paper, motivated by these needs, we build a merged code change prediction tool. Our approach first extracts 34 features from code changes, grouped into 5 dimensions: code, file history, owner experience, collaboration network, and text. We then leverage machine learning techniques such as random forest to build a prediction model. To evaluate the performance of our approach, we conduct experiments on three open source projects (i.e., Eclipse, LibreOffice, and OpenStack), containing a total of 166,215 code changes. Across the three datasets, our approach statistically significantly outperforms a random-guess classifier and the two prediction models proposed by Jeong et al. (2009) and Gousios et al. (2014) in terms of several evaluation metrics. Besides, we also study the important features that distinguish merged code changes from abandoned ones.
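
A minimal sketch of this kind of setup (not the authors' implementation): a handful of hypothetical per-change features standing in for the 34 described above, fed to a random forest classifier:

from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per change: lines added, files touched, owner prior
# merge rate, reviewer count, description length (stand-ins only).
X = [
    [120,  3, 0.90, 2,  300],
    [ 15,  1, 0.95, 1,   80],
    [900, 20, 0.40, 4, 1200],
    [ 60,  2, 0.10, 1,   40],
    [ 30,  1, 0.85, 3,  150],
    [400,  8, 0.30, 2,  500],
    [ 10,  1, 0.70, 1,   60],
    [250,  5, 0.20, 2,  700],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = merged, 0 = abandoned (invented labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_change = [[200, 4, 0.75, 2, 250]]
print("P(merged) =", clf.predict_proba(new_change)[0][1])
print("feature importances:", clf.feature_importances_)

The feature importances reported by the forest are one way to study which features distinguish merged from abandoned changes.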

8.
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the high-performance computing (HPC) literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15 times improvement compared with the original code at a given concurrency. Additionally, we present a detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
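
The search-based auto-tuning idea itself is generic and can be sketched in a few lines (this is not the LBMHD code generator; the tuned parameter here is just a blocking factor, which matters little in pure Python but shows the search harness):

import timeit

def make_blocked_sum(block):
    """Return one candidate variant of a summation kernel with a given blocking factor."""
    def kernel(data):
        total = 0
        n = len(data)
        for start in range(0, n, block):
            for i in range(start, min(start + block, n)):
                total += data[i]
        return total
    return kernel

data = list(range(100_000))
candidates = {block: make_blocked_sum(block) for block in (16, 64, 256, 1024)}

best_block, best_time = None, float("inf")
for block, kernel in candidates.items():
    t = timeit.timeit(lambda: kernel(data), number=3)   # time each variant on this machine
    print(f"block={block:5d}  time={t:.3f}s")
    if t < best_time:
        best_block, best_time = block, t
print("selected variant: block =", best_block)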

9.
Context: Code generators can automatically perform some tedious and error-prone implementation tasks, increasing productivity and quality in the software development process. Most code generators are based on templates, which are fundamentally composed of text expansion statements. To build templates, the code of an existing, tested and validated implementation may serve as a reference, in a process known as templatization. With the dynamics of software evolution/maintenance and the need to perform changes in the code generation templates, there is a loss of synchronism between the templates and this reference code. Additional effort is required to keep them synchronized. Objective: This paper proposes automation as a way to reduce the extra effort needed to keep templates and reference code synchronized. Method: A mechanism was developed to semi-automatically detect and propagate changes from reference code to templates, keeping them synchronized with less effort. The mechanism was also submitted to an empirical evaluation to analyze its effects in terms of effort reduction during maintenance/evolution templatization. Results: It was observed that the developed mechanism can lead to a 50% reduction in the effort needed to perform maintenance/evolution templatization, when compared to a manual approach. It was also observed that this effect depends on the nature of the evolution/maintenance task, since for one of the tasks there was no observable advantage in using the mechanism. However, further studies are needed to better characterize these tasks. Conclusion: Although there is still room for improvement, the results indicate that automation can be used to reduce effort and cost in the maintenance and evolution of a template-based code generation infrastructure.
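
One plausible ingredient of such a mechanism, detecting drift between the reference code and the snapshot from which the templates were built, can be sketched with a plain line diff (difflib stands in for the paper's mechanism; the code fragments are invented):

import difflib

snapshot = """public class Customer {
    private String name;
    public String getName() { return name; }
}""".splitlines()

reference_now = """public class Customer {
    private String name;
    private String email;
    public String getName() { return name; }
    public String getEmail() { return email; }
}""".splitlines()

# Report each changed region so it can be reviewed and propagated to the templates.
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, snapshot, reference_now).get_opcodes():
    if tag != "equal":
        print(f"{tag}: snapshot[{i1}:{i2}] -> reference[{j1}:{j2}]")
        for line in reference_now[j1:j2]:
            print("   +", line)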

10.
The correct configuration of the control code is a critical part of every process control system engineering project. To ensure the conformity of the implemented control functions with the customer's specifications, test activities, e.g., the factory acceptance test (FAT), are conducted in every control engineering project. For the past several years, control code tests have increasingly been carried out on simulation models to increase test coverage and timeliness. Despite the advantages that simulation methods offer, the manual effort for generating an applicable simulation model is still high. To reduce this effort, an automated model generation is proposed in this paper. The models automatically generated by this approach provide a modeling level of detail that matches the requirements for the tests of the control code on the base automation level. Therefore, these models do not need to be as detailed as the high-fidelity models which are used for, e.g., model predictive control (MPC) applications. Within this paper, the authors describe an approach to automatically generate simulation models for control code tests based on given computer aided engineering (CAE) planning documents.
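
A loose sketch of the underlying idea, mapping structured planning data to low-fidelity simulation blocks, is given below; the device list, block types, and parameters are invented, and the actual approach works on richer CAE documents and model libraries:

cae_devices = [
    {"tag": "LIC-101", "type": "level_control_valve"},
    {"tag": "TI-205",  "type": "temperature_sensor"},
    {"tag": "P-301",   "type": "pump"},
]

# mapping from device type to a simple low-fidelity simulation block template
BLOCK_TEMPLATES = {
    "level_control_valve": {"kind": "first_order_lag", "gain": 1.0, "tau_s": 5.0},
    "temperature_sensor":  {"kind": "first_order_lag", "gain": 1.0, "tau_s": 2.0},
    "pump":                {"kind": "on_off",          "ramp_s": 1.0},
}

# "generated" simulation model: one block per planned device
simulation_model = [{"tag": d["tag"], **BLOCK_TEMPLATES[d["type"]]} for d in cae_devices]
for block in simulation_model:
    print(block)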

11.
Software development effort prediction is addressed by several international software process models and standards, such as the Capability Maturity Model Integration (CMMI), ISO-15504, and ISO/IEC 12207. In this paper, data on two kinds of lines of code, gathered from programs developed with practices based on the Personal Software Process (PSP), were used as independent variables in three models for estimating and predicting development effort. Samples of 163 and 80 programs were used for verifying and validating the models, respectively. The prediction accuracy of a multiple linear regression, a general regression neural network, and a fuzzy logic model was compared using the magnitude of error relative to the estimate (MER) and the mean square error (MSE) as criteria. The results support the following hypothesis: the effort prediction accuracy of a general regression neural network is statistically equal to that of a fuzzy logic model and of a multiple linear regression, when new-and-changed code and reused code obtained from small-scale programs developed with personal practices are used as independent variables.
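
The two accuracy criteria are simple to state; the sketch below computes MER and MSE for a least-squares fit of effort against the two LOC counts (the program data are invented, and the linear fit stands in for only one of the three compared models):

import numpy as np

# columns: new-and-changed LOC, reused LOC; target: actual effort (minutes) -- invented
X = np.array([[120, 30], [80, 0], [200, 50], [60, 20], [150, 10]], float)
y = np.array([300, 180, 520, 160, 370], float)

A = np.column_stack([X, np.ones(len(X))])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # multiple linear regression
pred = A @ coef

mer = np.mean(np.abs(y - pred) / pred)             # magnitude of error relative to the estimate
mse = np.mean((y - pred) ** 2)                     # mean square error
print(f"coefficients={coef.round(2)}  MER={mer:.3f}  MSE={mse:.1f}")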

12.
This work deals with the numerical evaluation of the structural response of simply supported (transversally loaded at mid-span) and cantilever (subjected to tip point loads) beams built from a commercial pultruded I-section GFRP profile. In particular, the paper addresses the beam (i) geometrically linear behaviour in service conditions, (ii) local and lateral-torsional buckling behaviour, and (iii) lateral-torsional post-buckling behaviour, including the effect of the load point of application location. The numerical results are obtained by means of (i) novel Generalised Beam Theory (GBT) beam finite element formulations, able to capture the influence of the load point of application, and (ii) shell finite element analyses carried out in the code Abaqus. These numerical results are compared with (i) the experimental values reported and discussed in the companion paper (Part 1) and (ii) values provided by analytical formulae available in the literature.
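
For the geometrically linear service-condition case only, a back-of-the-envelope check can accompany the numerical results; the sketch below uses classical Euler-Bernoulli formulas with shear deformation ignored and invented section properties, so it is not a substitute for the GBT or shell models:

E = 30e9        # assumed longitudinal modulus of the GFRP profile, Pa
I = 2.3e-5      # assumed second moment of area of the I-section, m^4
P = 5_000.0     # point load, N
L = 3.0         # span / cantilever length, m

delta_ss   = P * L**3 / (48 * E * I)   # simply supported, point load at mid-span
delta_cant = P * L**3 / (3 * E * I)    # cantilever, point load at the tip
print(f"simply supported mid-span deflection: {delta_ss * 1000:.2f} mm")
print(f"cantilever tip deflection:            {delta_cant * 1000:.2f} mm")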

13.
Exercise can improve health and well-being. With this in mind, immersive virtual reality (VR) games are being developed to promote physical activity, and are generally evaluated through user studies. However, building such applications is time consuming and expensive. This paper introduces VR-Rides, an object-oriented application framework focused on the development of experiment-oriented VR exergames. Following the modular programming pattern, this framework facilitates the integration of different hardware (such as VR devices, sensors, and physical activity devices) within immersive VR experiences that overlay game narratives on Google Street View panoramas. Combining software engineering and interaction patterns, modules of VR-Rides can be easily added and managed in the Unity game engine. We evaluate the code efficiency and development effort across our VR exergames developed using VR-Rides. The reliability, maintainability, and usability of our framework are also demonstrated via code metrics analysis and user studies. The results show that investing in a systematic approach to reusing code and design can be a worthwhile effort for researchers beyond software engineering.

14.
The conventional method of assessing supercomputer performance by measuring the execution time of software has many shortcomings. First, effort is required to write and debug the software. Second, time on the machine is required, and additional effort is needed to verify the validity of the test. Third, alterations to the algorithm require changing the code and retiming. Fourth, a black box approach to determining machine performance leaves the user with little confidence in how well the software was optimized. We present a pencil-and-paper methodology for computing the execution time of vectorized loops on a Cray Research X-MP/Y-MP. With this methodology a user can accurately compute the processing rate of an algorithm before the software is actually written. When several implementations of an algorithm are designed, this methodology can be used to select the best one for development, preventing wasted coding effort on less efficient implementations. Since this methodology computes optimal machine performance, it can be used to verify the efficiency of compiler translation. Changes to algorithms are easily appraised to determine their effect on performance. While the purpose of the methodology is to compute an algorithm's execution time, a side benefit is that this technique induces the user to think in terms of optimization. Bottlenecks in the code are pinpointed, and possible options for increased performance become obvious. At E-Systems, this methodology has become an integral part of the software development of vector-intensive code. This article is written specifically for Cray Research X-MP/Y-MP supercomputers, but many of the general concepts are applicable to other machines and therefore should benefit a number of supercomputer users.
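
A generic strip-mined vector timing model in the same spirit (but not the article's Cray-specific derivation; the startup, overhead, and clock parameters below are all assumed) looks like this:

import math

def vector_loop_time(n, chimes, vector_len=64, startup=15, loop_overhead=10, clock_ns=8.5):
    """Estimate execution time (in microseconds) of a vectorized loop.

    n             -- number of loop iterations (elements processed)
    chimes        -- number of convoys/chimes the loop body needs per strip
    vector_len    -- hardware maximum vector length (64 on the X-MP/Y-MP)
    startup       -- pipeline startup cycles charged per strip (assumed)
    loop_overhead -- scalar strip-mining overhead cycles per strip (assumed)
    clock_ns      -- cycle time in nanoseconds (assumed, roughly X-MP era)
    """
    strips = math.ceil(n / vector_len)
    cycles = strips * (startup + loop_overhead) + n * chimes
    return cycles * clock_ns / 1000.0

# e.g. a SAXPY-like loop whose body chains into 3 chimes (assumed)
for n in (64, 1000, 100_000):
    print(f"n={n:6d}  estimated time = {vector_loop_time(n, chimes=3):10.1f} us")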

15.
This paper describes a tool called Source Code Review User Browser (SCRUB) that was developed to support a more effective and tool-based code review process. The tool was designed to support a large team-based software development effort of mission critical software at JPL, but can also be used for individual software development on small projects. The tool combines classic peer code review with machine-generated analyses from a customizable range of source code analyzers. All reports, whether generated by humans or by background tools, are accessed through a single uniform interface provided by SCRUB.
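
The "single uniform interface" idea can be sketched as merging human review comments and analyzer findings keyed by file and line; the data and field names below are invented, and SCRUB itself is not shown:

from collections import defaultdict

peer_comments = [
    {"file": "nav.c", "line": 120, "who": "reviewer-A", "text": "check for NULL before dereference"},
]
analyzer_findings = [
    {"file": "nav.c", "line": 120, "who": "analyzer-1", "text": "possible null pointer dereference"},
    {"file": "ctrl.c", "line": 57, "who": "analyzer-2", "text": "unused variable 'tmp'"},
]

# group every report, human or machine, under its (file, line) location
merged = defaultdict(list)
for report in peer_comments + analyzer_findings:
    merged[(report["file"], report["line"])].append((report["who"], report["text"]))

for (path, line), items in sorted(merged.items()):
    print(f"{path}:{line}")
    for who, text in items:
        print(f"  [{who}] {text}")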

16.
Barnard, J.; Price, A. IEEE Software, 1994, 11(2): 59-69.
Inspection data is difficult to gather and interpret. At AT&T Bell Laboratories, the authors have defined nine key metrics that software project managers can use to plan, monitor, and improve inspections. Graphs of these metrics expose problems early and can help managers evaluate the inspection process itself. The nine metrics are: total noncomment lines of source code inspected in thousands (KLOC); average lines of code inspected; average preparation rate; average inspection rate; average effort per KLOC; average effort per fault detected; average faults detected per KLOC; percentage of reinspections; and defect-removal efficiency.
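
Most of these metrics are straightforward ratios over inspection records; a small sketch with invented records (defect-removal efficiency is omitted because it also needs post-release fault data):

inspections = [
    {"loc": 420, "prep_hours": 6.0, "meeting_hours": 2.0, "faults": 9,  "reinspection": False},
    {"loc": 310, "prep_hours": 4.5, "meeting_hours": 1.5, "faults": 5,  "reinspection": False},
    {"loc": 500, "prep_hours": 7.0, "meeting_hours": 2.5, "faults": 14, "reinspection": True},
]

total_loc    = sum(i["loc"] for i in inspections)
total_effort = sum(i["prep_hours"] + i["meeting_hours"] for i in inspections)
total_faults = sum(i["faults"] for i in inspections)
kloc = total_loc / 1000.0

print(f"total KLOC inspected        : {kloc:.2f}")
print(f"average LOC per inspection  : {total_loc / len(inspections):.0f}")
print(f"average effort per KLOC     : {total_effort / kloc:.1f} staff-hours")
print(f"average effort per fault    : {total_effort / total_faults:.2f} staff-hours")
print(f"faults detected per KLOC    : {total_faults / kloc:.1f}")
print(f"percentage of reinspections : {100 * sum(i['reinspection'] for i in inspections) / len(inspections):.0f}%")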

17.
Improving the efficiency of the testing process is a challenging goal. Prior work has shown that often a small number of errors account for the majority of software failures, and that most errors are found in a small portion of the source code. We argue that prioritizing code elements before conducting testing can help testers focus their testing effort on the parts of the code most likely to expose errors. This can, in turn, promote more efficient testing of software. With this in view, we propose a testing effort prioritization method to guide testers during the software development life cycle. Our approach takes five factors of a component as inputs, namely influence value, average execution time, structural complexity, severity, and value, and produces the priority value of the component as output. Once all components of a program have been prioritized, testing effort can be apportioned so that the components causing more frequent and/or more severe failures will be tested more thoroughly. Our proposed approach is effective in guiding testing effort, as it is linked to external measures of defect severity and business value and to internal measures of frequency and complexity. As a result, the failure rate is decreased and the chance of severe types of failures is also decreased in the operational environment. We have conducted experiments to compare our scheme with a related scheme. The results establish that our proposed approach, which prioritizes testing effort within the source code, is able to minimize highly severe types of failures as well as the number of failures at the post-release time of a software system.
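
A purely illustrative sketch of combining the five factors into a priority value and apportioning a test budget proportionally (the weights, the combination rule, and the component data are invented; the paper's exact scheme may differ):

components = {
    # name: (influence, avg_exec_time, structural_complexity, severity, value), all normalized 0..1
    "PaymentGateway": (0.9, 0.4, 0.7, 0.9, 0.8),
    "ReportViewer":   (0.3, 0.2, 0.4, 0.3, 0.4),
    "AuthService":    (0.8, 0.5, 0.6, 0.8, 0.9),
    "Logging":        (0.2, 0.1, 0.3, 0.2, 0.2),
}
weights = (0.25, 0.15, 0.20, 0.25, 0.15)   # assumed relative importance of the five factors

priority = {name: sum(w * f for w, f in zip(weights, factors))
            for name, factors in components.items()}

budget_hours = 100.0
total = sum(priority.values())
for name, p in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} priority={p:.2f}  allotted effort={budget_hours * p / total:5.1f} h")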

18.
Most reports of Internet collaboration refer to small scale operations among a few authors or designers. However, several projects have shown that the Internet can also be the locus for large scale collaboration. In these projects, contributors from around the world combine their individual forces and develop a product that rivals those of multibillion dollar corporations. The Apache HTTP Server Project is a case in point. This collaborative software development effort has created a robust, feature-rich HTTP server software package that currently dominates the public Internet market (46 percent compared with 16 percent for Microsoft and 12 percent for Netscape, according to a June 1997 survey published by Netcraft). The software and its source code are free, but Apache's popularity is more often attributed to performance than price. The project is managed by the Apache Group, a geographically distributed group of volunteers who use the Internet and Web to communicate, develop, and distribute the server and its related documentation. In addition, hundreds of users have contributed ideas, code, and documentation to the project.

19.
Morozoff, E. IEEE Software, 2010, 27(1): 72-77.
Developers should factor rework into sizing and productivity calculations when estimating software effort. Reworked code is software created during development that doesn't exist in the final build. Using lines of code as a sizing metric is helpful when estimating projects with similar domains, platforms, processes, development teams, and coding constraints. A simple method can measure new effective lines of code (nLOC), that is, added, changed, or deleted nonblank, noncomment lines of code, during development. Analysis of three projects shows that between 19 and 40 percent of the code written wasn't in the final release, with nLOC measured biweekly and accumulated over each project's development.
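
A minimal sketch of the bookkeeping (the per-period counts are invented, and the rework derivation shown is one plausible reading of the definition above, not the article's exact procedure):

biweekly_counts = [
    # (added, changed, deleted) nonblank, noncomment lines in each two-week period
    (1200, 150,  40),
    ( 900, 300, 120),
    ( 600, 450, 300),
    ( 300, 500, 450),
]

# accumulate nLOC over the whole development period
nloc_written = sum(a + c + d for a, c, d in biweekly_counts)
final_build_loc = 3450          # LOC present in the final release (assumed)

# one plausible rework measure: share of written code that is absent from the final build
rework_pct = 100.0 * (nloc_written - final_build_loc) / nloc_written
print(f"nLOC written = {nloc_written}, in final build = {final_build_loc}, rework = {rework_pct:.0f}%")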

20.
This paper reports on a study carried out within a software development organization to evaluate the use of function points as a measure of early-lifecycle software size. The research had three major aims: first, to determine the extent to which the component elements of function points are independent of each other and thus appropriate for an additive model of size; second, to investigate the relationship between effort and (1) the function point components, (2) unadjusted function points, and (3) adjusted function points, to determine whether the complexity weightings and technology adjustments add to the effort-explanation power of the metric; and third, to investigate the suitability of function points for sizing client-server developments. The results show that the component parts are not independent of each other, which supports an earlier study in this area. In addition, the complexity weights and technology factors do not improve the effort/size model, suggesting that a simplified sizing metric may be appropriate. With respect to the third aim, it was found that the function point metric revealed much lower productivity in the client-server environment. This is likely a reflection of the cost of introducing newer technologies, but it needs further research.
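
For reference, the standard IFPUG-style arithmetic that the study evaluates (unadjusted function points from weighted component counts, then the value adjustment factor) is easy to reproduce; the counts and GSC ratings below are invented:

WEIGHTS = {  # component: (low, average, high) complexity weights per IFPUG counting practice
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

# counts per component at (low, average, high) complexity -- hypothetical system
counts = {"EI": (5, 4, 2), "EO": (3, 6, 1), "EQ": (4, 2, 0), "ILF": (2, 3, 1), "EIF": (1, 1, 0)}

ufp = sum(c * w for comp in WEIGHTS for c, w in zip(counts[comp], WEIGHTS[comp]))

gsc = [3, 2, 4, 3, 1, 0, 2, 3, 4, 2, 1, 0, 3, 2]   # 14 general system characteristics, each rated 0-5
vaf = 0.65 + 0.01 * sum(gsc)                        # value adjustment factor
afp = ufp * vaf                                     # adjusted function points

print(f"UFP = {ufp}, VAF = {vaf:.2f}, AFP = {afp:.1f}")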
