Similar Articles
1.
As software systems become increasingly massive, the advantages of automated transformation tools are clearly evident. These tools allow the machine to both reason about and manipulate high-level source code. They enable off-loading of mundane and laborious programming tasks from the human developer to the machine, thereby reducing cost and development time frames. Although there has been much work in software transformation, there still exist many hurdles in realizing this technology in a commercial domain. From our own experience, there are two significant problems that must be addressed before transformation technology can be usefully applied in a commercial setting. These are: (1) Avoiding disruption of the style (i.e., layout and commenting) of source code and the introduction of any undesired modifications that can occur as a side effect of the transformation process. (2) Correct automated handling of C preprocessing and the presentation of a semantically correct view of the program during transformation. Many existing automated transformation tools require source to be manually modified so that preprocessing constructs can be parsed. The real semantics of the program remain obscured, resulting in the need for complicated analysis during transformation. Many systems also resort to pretty printing to generate transformed programs, which inherently disrupts coding style. In this paper we describe our own C/C++ transformation system, Proteus, which addresses both of these issues. It has been tested on millions of lines of commercial C/C++ code and has been shown to meet the stringent criteria laid out by Lucent's own software developers.
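To make the second problem concrete, here is a small hypothetical C++ fragment (ours, not taken from the paper) in which conditional compilation hides part of the program from any tool that only sees one preprocessed configuration; the identifiers are invented for illustration.

```cpp
#include <cstdio>

// Hypothetical example: a transformer must see both branches, but a tool
// working on preprocessed output sees only the branch the build selected.
#ifdef USE_WIDE_COUNTER
typedef long long counter_t;   // configuration A
#else
typedef int counter_t;         // configuration B
#endif

counter_t g_count = 0;         // renaming counter_t must touch both branches

int main() {
    ++g_count;
    std::printf("%lld\n", static_cast<long long>(g_count));
    return 0;
}
```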

2.
Analyses of C source code usually ignore the C preprocessor because of its complexity. Instead, these analyses either define their own approximate parser or scanner, or else they require that their input already be preprocessed. Neither approach is entirely satisfactory: the first gives up accuracy (or incurs large implementation costs), while the second loses the preprocessor-based abstractions. We describe a framework that permits analyses to be expressed in terms of both preprocessing and parsing actions, allowing the implementer to focus on the analysis. We discuss an implementation of such a framework that embeds a C preprocessor, a parser, and a Perl interpreter for the action 'hooks'. Many common software engineering analyses can be written surprisingly easily using our implementation, replacing numerous ad-hoc tools. The framework's integration of the preprocessor and the parser further enables some analyses that otherwise would be especially difficult to perform. Copyright © 2000 John Wiley & Sons, Ltd.
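As an illustration (ours, not the paper's), the fragment below shows why neither an approximate parser nor preprocessed input is satisfactory: unexpanded, the FOR_EACH macro is not valid C++ syntax, while after expansion the abstraction an analysis might want to report on is gone. The macro name is invented.

```cpp
#include <cstdio>
#include <vector>

// Before expansion this call is not parseable as C++; after expansion the
// FOR_EACH abstraction has vanished from the analysed text.
#define FOR_EACH(it, v) for (auto it = (v).begin(); it != (v).end(); ++it)

int main() {
    std::vector<int> xs{1, 2, 3};
    FOR_EACH(i, xs) {
        std::printf("%d\n", *i);
    }
    return 0;
}
```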

3.
Automated program transformation holds promise for a variety of software life cycle endeavors, particularly where the size of legacy systems makes manual code analysis, re-engineering, and evolution difficult and expensive. But constructing highly scalable transformation tools supporting modern languages in full generality is itself a painstaking and expensive process. This cost can be managed by developing a common transformation system infrastructure reusable by derived tools that each address specific tasks, thus leveraging the infrastructure costs. This paper describes the Design Maintenance System (DMS), a practical, commercial program analysis and transformation system, and discusses how it was employed to construct a custom modernization tool being applied to a large C++ avionics system. The tool transforms components developed in a 1990s-era component style to a more modern CORBA-like component framework, preserving functionality.

4.
A software repository provides a central information source for understanding and reengineering code in a software project. Complex reverse engineering tools can be built by analyzing information stored in the repository without reparsing the original source code. The most critical design aspect of a repository is its data model, which directly affects how effectively the repository supports various analysis tasks. This paper focuses on the design rationales behind a data model for a C++ software repository that supports reachability analysis and dead code detection at the declaration level. These two tasks are frequently needed in large software projects to help remove excess software baggage, select regression tests and support software reuse studies. The language complexity introduced by class inheritance, friendship, and template instantiation in C++ requires a carefully designed model to catch all necessary dependencies for correct reachability analysis. We examine the major design decisions and their consequences in our model and illustrate how future software repositories can be evaluated for completeness at a selected abstraction level. Examples are given to illustrate how our model also supports variants of reachability analysis: impact analysis, class visibility analysis, and dead code detection. Finally, we discuss the implementation and experience of our analysis tools on a few C++ software projects.
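A minimal sketch (ours, not from the paper) of the kind of declaration-level dependency such a data model has to record: Derived::run() is never called by name, yet dynamic dispatch through Base makes it reachable, so a repository that ignored inheritance would misreport it as dead code.

```cpp
#include <cstdio>

struct Base {
    virtual void run() { std::printf("Base\n"); }
    virtual ~Base() = default;
};

struct Derived : Base {
    // No direct call site anywhere, but reachable through Base::run() dispatch.
    void run() override { std::printf("Derived\n"); }
};

int main() {
    Base* b = new Derived();
    b->run();      // dynamically dispatches to Derived::run()
    delete b;
    return 0;
}
```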

5.
C++ code that compiles successfully is not guaranteed to be free of defects. The code may still contain hidden security, design, or style flaws that lead to memory leaks, pointer misuse, and similar problems at run time, or that make the code unclear and hard to read. To find these defects effectively, this paper explores automated C++ defect detection with customizable defect rules, introduces two defect-localization methods, presents an XPath-based approach to defining defect rules, and designs and implements an automated code defect detection tool, CDD (C++ defect detector). Experiments demonstrate the effectiveness of the localization methods and the ease of use of CDD.
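For illustration (our own example, not from the paper), these are the kinds of defects such a rule-based checker targets: the code compiles, yet it leaks memory and returns a pointer into a dead stack frame.

```cpp
#include <cstring>

int leaky() {
    int* buf = new int[16];   // allocated here ...
    buf[0] = 42;
    return buf[0];            // ... but never released: memory leak
}

const char* dangling() {
    char local[8];
    std::strcpy(local, "oops");
    return local;             // pointer to a stack frame that no longer exists
}

int main() {
    leaky();
    dangling();
    return 0;
}
```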

6.
Code transformation and analysis tools provide support for software engineering tasks such as style checking, testing, calculating software metrics as well as reverse- and re-engineering. In this paper we describe the architecture and the applications of JTransform, a general Java source code processing and transformation framework. It consists of a Java parser generating a configurable parse tree and various visitors (transformers, tree evaluators) which produce different kinds of outputs. While our framework is written in Java, the paper further opens an opportunity for a new generation of XML-based source code tools. Copyright © 2004 John Wiley & Sons, Ltd.

7.
Context: The number of students enrolled in universities at standard and on-line programming courses is rapidly increasing. This calls for automated evaluation of students' assignments. Objective: We aim to develop methods and tools for objective and reliable automated grading that can also provide substantial and comprehensible feedback. Our approach targets introductory programming courses, which have a number of specific features and goals. The benefits are twofold: reducing the workload for teachers, and providing helpful feedback to students in the process of learning. Method: For sophisticated automated evaluation of students' programs, our grading framework combines the results of three approaches: (i) testing, (ii) software verification, and (iii) control flow graph similarity measurement. We present our tools for software verification and control flow graph similarity measurement, which are publicly available and open source. The tools are based on an intermediate code representation, so they could be applied to a number of programming languages. Results: Empirical evaluation of the proposed grading framework is performed on a corpus of programs written by university students in the programming language C within an introductory programming course. Results of the evaluation show that the synergy of the proposed approaches improves the quality and precision of automated grading and that automatically generated grades are highly correlated with instructor-assigned grades. Also, the results show that our approach can be trained to adapt to a teacher's grading style. Conclusions: In this paper we integrate several techniques for the evaluation of students' assignments. The obtained results suggest that the presented tools can find real-world applications in automated grading.
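A small invented example of why control flow graph similarity complements testing: these two submissions differ textually, yet their control flow graphs have essentially the same shape, so a grader that compared text alone would underestimate how close the student solution is to the reference.

```cpp
#include <cstdio>

// Reference solution: sum of the first n array elements.
int sum_for(const int* a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}

// Student solution: textually different, same control-flow structure.
int sum_while(const int* a, int n) {
    int s = 0, i = 0;
    while (i < n) { s += a[i]; ++i; }
    return s;
}

int main() {
    int xs[] = {1, 2, 3, 4};
    std::printf("%d %d\n", sum_for(xs, 4), sum_while(xs, 4));
    return 0;
}
```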

8.
Scientific software production dates back to the days before the computer science discipline obtained its own name. Over the past 76 years, scientists have been producing software, which means that most of the modern techniques and software engineering methods available these days did not exist while part of this process was taking place. Change-driven development was born as a new approach to maintaining and developing scientific software, founded on the principles of software essence (changeability, complexity, intangibility, and conformity), integrated development tools, and automated source code transformation. This new, agile approach takes change as a working unit devised to drive the entire development process, which is performed in a four-stage cycle. One of the most interesting ways to apply change-driven development to scientific software is to update, modernize and even parallelize sequential programs that were written 20 or 30 years ago and are still running in production environments. This process will be thoroughly described and implemented. Two successful case studies will be presented and analyzed in depth.

9.
A major thrust of modern software engineering methods, languages, and tools is to promote software visibility and to present information about the underlying software architecture. With large, complex software systems, automated tools are indispensable for identifying the architectural components, the structure that interconnects them, and other subtle dependencies. This article describes the construction of an Ada System Dependency Analyzer (SDA), a software architecture analysis tool that generates a quantitative snapshot of an Ada application's software architecture. The SDA can process thousands of Ada source files during a single run and report on them as a group of files comprising a single Ada system. Our SDA tool identifies Ada source code dependencies on COTS products such as operating systems, compilers, the X Window System, and on routines written in other languages, and can thus predict software portability and reliability problems. It rapidly and accurately processes 24,000 lines of code per minute (a time-consuming, if not impossible, operation if done manually) and has successfully processed more than seven million lines of code in eight complex systems. Although originally developed for Ada, our methods and the technology we adopted will let us construct analogous tools for other programming languages such as C, C++, Cobol, and PL/I.

10.
Preprocessing plays an important role in C/C++, but the preprocessor has a number of shortcomings: when a header file is included, its content cannot be adapted at the inclusion point; code reusability is low; and large amounts of code are duplicated. This paper proposes replacing the original preprocessing mechanism with a higher-level configuration language, XVCL (XML-based Variant Configuration Language), to overcome these problems. Files are organized into a tree structure, and a variable scoping mechanism is defined to improve reusability. A worked example is used to validate the effectiveness of the proposed approach.
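To illustrate the duplication problem mentioned above (our own sketch, not the paper's example): with the plain preprocessor an included declaration cannot be adapted at its inclusion point, so near-identical variants get copied by hand. The C++ template is shown only for contrast; XVCL's frames aim to remove this kind of duplication above the preprocessor level and independently of the language.

```cpp
// Hand-made variants: the preprocessor offers no way to adapt an included
// declaration per use, so the code is simply duplicated.
struct PointInt   { int   x, y; };
struct PointFloat { float x, y; };   // copy of the above with one type changed

// For contrast: inside C++ a template removes the duplication; XVCL is meant
// to provide a comparable, language-independent mechanism.
template <typename T>
struct Point { T x, y; };

int main() {
    Point<int> p{1, 2};
    return p.x + p.y;
}
```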

11.
Many tasks in software engineering can be characterized as source to source transformations. Design recovery, software restructuring, forward engineering, language translation, platform migration, and code reuse can all be understood as transformations from one source text to another. The tree transformation language, TXL, is a programming language and rapid prototyping system specifically designed to support rule-based source to source transformation. Originally conceived as a tool for exploring programming language dialects, TXL has evolved into a general purpose source transformation system that has proven well suited to a wide range of software maintenance and reengineering tasks, including the design recovery, analysis and automated reprogramming of billions of lines of commercial Cobol, PL/I, and RPG code for the Year 2000. In this paper, we introduce the basic features of modern TXL and its use in a range of software engineering applications, with an emphasis on how each task can be achieved by source transformation.

12.
The usual C/C++ preprocessor is a macro processor that automatically converts source files into a form the compiler can recognize before compilation. Traditional preprocessing is based on textual line replacement and takes no account of the surrounding context. This mechanism has shortcomings in file inclusion, macro scoping, and header-file relationships, which hamper code reuse in engineering projects and reduce maintainability and extensibility. Starting from an analysis of the C preprocessor's deficiencies, FOG [1] and its language yield a solution based on a syntactic replacement mechanism built on meta-variables and meta-functions.
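The classic illustration of context-free textual replacement (our example, not FOG's): the macro body is substituted as text, so operator precedence inside the argument is ignored.

```cpp
#include <cstdio>

#define SQUARE(x) x * x          // textual replacement, no notion of context

int main() {
    int a = 2, b = 3;
    // Expands to: a + b * a + b  ==  2 + 6 + 3  ==  11, not (a+b)*(a+b) == 25.
    std::printf("%d\n", SQUARE(a + b));
    return 0;
}
```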

13.
Joseph L. Steffen. Software, 1984, 14(4): 323-334
Debugging tools are usually highly machine and compiler dependent programs that are either impossible to move or require a tremendous effort to move to another machine. This paper describes Ctrace, a portable debugging tool for the C language that provides the same, easy-to-use debugging environment on all machines. A similar debugger can be written for any language. Ctrace is a preprocessor that, before compilation, inserts source language debugging code into the program; at execution time this code prints the text of each source language statement along with the values of the variables it uses and modifies. Redundant trace output from program loops is detected and eliminated. Tracing can be limited to selected statements or functions. Ctrace has shown its usefulness and popularity by its installation on over 200 computers at Bell Laboratories. It has proven its portability by being used to test software on four different operating systems and nine different processors.
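A rough sketch of the idea, not Ctrace's actual output: the instrumented program prints the text of each traced statement together with the value of a variable it modifies. The TRACE macro is invented for illustration.

```cpp
#include <cstdio>

// Invented helper: print the statement text and the current value of one
// variable it modifies, roughly what statement-level trace insertion yields.
#define TRACE(text, var) std::printf("trace: %-18s %s = %d\n", text, #var, (var))

int main() {
    int n = 3;          TRACE("int n = 3;", n);
    int sum = 0;        TRACE("int sum = 0;", sum);
    for (int i = 0; i < n; ++i) {
        sum += i;       TRACE("sum += i;", sum);
    }
    std::printf("result: %d\n", sum);
    return 0;
}
```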

14.
This paper presents an approach for automated selection of tools for turning and boring operations. A prototype is presented that considers all toolholder and insert parameters in the selection process. The design offers flexibility, expandability, and the capability of selecting tools from any tool manufacturer. The resulting software system could be used by a machine manufacturer, a tool consultant, or a shop-floor engineer to modify existing tooling on the machine, recommend new tooling for a product, and solve tool-related problems.

15.
Many architectural languages have been proposed in the last 15 years, each one with the chief aim of becoming the ideal language for specifying software architectures. What is evident nowadays, instead, is that architectural languages are defined by stakeholder concerns. Capturing all such concerns within a single, narrowly focused notation is impossible. At the same time, it is also impractical to define and use a "universal" notation, such as UML. As a result, many domain-specific notations for architectural modeling have been proposed, each one focusing on a specific application domain, analysis type, or modeling environment. As a drawback, a proliferation of languages exists, each one with its own specific notation, tools, and domain specificity. No effective interoperability is possible to date. Therefore, if a software architect has to model a concern not supported by his own language/tool, he has to manually transform (and, eventually, keep aligned) the available architectural specification into the required language/tool. This paper presents DUALLy, an automated framework that allows architectural languages and tools to interoperate. Given a number of architectural languages and tools, they can all interoperate thanks to automated model transformation techniques. DUALLy is implemented as an Eclipse plugin. To put it into practice, we apply the DUALLy approach to the Darwin/FSP ADL and to a UML2.0 profile for software architectures. Using an industrial complex system, we transform a UML software architecture specification into Darwin/FSP, perform some verifications using LTSA, and reflect the changes required by the verifications back into the UML specification.

16.
In areas as diverse as earth remote sensing, astronomy, and medical imaging, image acquisition technology has undergone tremendous improvements in recent years. The vast amounts of scientific data are potential treasure-troves for scientific investigation and analysis. Unfortunately, advances in our ability to deal with this volume of data in an effective manner have not paralleled the hardware gains. While special-purpose tools for particular applications exist, there is a dearth of useful general-purpose software tools and algorithms which can assist a scientist in exploring large scientific image databases. This paper presents our recent progress in developing interactive semi-automated image database exploration tools based on pattern recognition and machine learning technology. We first present a completed and successful application that illustrates the basic approach: the SKICAT system used for the reduction and analysis of a 3 terabyte astronomical data set. SKICAT integrates techniques from image processing, data classification, and database management. It represents a system in which machine learning played a powerful and enabling role, and solved a difficult, scientifically significant problem. We then proceed to discuss the general problem of automated image database exploration, the particular aspects of image databases which distinguish them from other databases, and how this impacts the application of off-the-shelf learning algorithms to problems of this nature. A second large image database is used to ground this discussion: Magellan's images of the surface of the planet Venus. The paper concludes with a discussion of current and future challenges.

17.
孙小祥, 陈哲. 计算机科学 (Computer Science), 2021, 48(1): 268-272
With the development of software runtime verification, many runtime memory-safety verification tools for the C language have appeared. Most of these tools implement runtime memory-safety checking by instrumenting source code or intermediate code. However, tools that have not been rigorously proven often suffer from two problems: first, the inserted instrumentation may change the behavior and semantics of the original program; second, the instrumentation may not actually guarantee memory safety. To address these problems, this paper proposes a formal method that uses the Coq theorem prover to decide whether the algorithm of a memory-safety verification tool is correct, and applies it to prove the correctness of the dynamic checking algorithm of Movec, a runtime verification tool for C. The proofs of the safety-specification properties show that Movec's dynamic memory-safety checking algorithm is correct.

18.
GRADE, a software environment for machine translation, is described. It has been developed for the Mu machine translation project, which was supported by the Science & Technology Agency of the Japanese Government. GRADE consists of 3 components: (1) a grammar writing language based on flexible tree-to-tree transformation rules with a control mechanism and an interpreter; (2) software tools for constructing and maintaining grammar rules; and (3) software tools for developing dictionary databases, which are based on the concept of a neutral dictionary. In this paper, these software packages are discussed from the viewpoint of the development of a large scale machine translation system. The authors thank their colleagues in the Mu project, who helped in the development of the system and have already returned to their private companies to develop their own machine translation systems. We also wish to thank the researchers at the Japan Information Center of Science and Technology, and the Electrotechnical Laboratory of Kyoto University for their cooperation.

19.
James Stanier, Des Watson. Software, 2012, 42(1): 117-130
Compilers use a variety of techniques to optimize and transform loops. However, many of these optimizations do not work when the loop is irreducible. Node splitting techniques transform irreducible loops into reducible loops, but many real-world compilers choose to leave them unoptimized. This article describes an empirical study of irreducibility in current versions of open-source software, and then compares them with older versions. We also study machine-generated C code from a number of software tools. We find that irreducibility is extremely rare, and is becoming less common with time. We conclude that leaving irreducible loops unoptimized is a perfectly feasible future-proof option due to the rarity of its occurrence in non-trivial software. Copyright © 2011 John Wiley & Sons, Ltd.
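For readers unfamiliar with the term, here is a minimal invented example of an irreducible loop: the goto provides a second entry into the loop body, so the loop has no unique header, and node splitting would be needed before most loop optimizations could apply.

```cpp
#include <cstdio>

int irreducible(int n) {
    int i = 0, c = 0;
    if (n % 2 != 0) goto body;   // second entry point into the loop
    for (; i < n; ++i) {
body:                            // entered from the for header or via the goto
        ++c;
    }
    return c;
}

int main() {
    std::printf("%d %d\n", irreducible(4), irreducible(5));
    return 0;
}
```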

20.
We study issues in verifying compilers for modern imperative and object-oriented languages. We take the view that it is not the compiler but the code generated by it which must be correct. It is this subtle difference that allows for reusing standard compiler architecture, construction methods and tools also in a verifying compiler. Program checking is the main technique for avoiding the cumbersome task of verifying most parts of a compiler and the tools by which they are generated. Program checking remaps the result of a compiler phase to its origin, the input of this phase, in a provably correct manner. We then only have to compare the actual input to its regenerated form, a basically syntactic process. The correctness proof of the generation of the result is replaced by the correctness proof of the remapping process. The latter turns out to be far easier than proving the generating process correct. The only part of a compiler where program checking does not seem to work is the transformation step which replaces source language constructs and their semantics, given, e.g., by an attributed syntax tree, by an intermediate representation, e.g., in SSA-form, which expresses the same program but in terms of the target machine. This transformation phase must be directly proven using Hoare logic and/or theorem provers. However, we can show that, given the features of today's programming languages and hardware architectures, this transformation is to a large extent universal: it can be reused for any pair of source and target languages. To achieve this goal we investigate annotating the syntax tree as well as the intermediate representation with constraints for exhibiting specific properties of the source language. Such annotations are necessary during code optimization anyway.
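A toy sketch of the program-checking idea under our own simplifications (single-letter operands, fully parenthesised input; none of this is from the paper): the untrusted "phase" builds a tree, and the small trusted checker remaps the tree back to text and compares it with the original input, a purely syntactic test. Only the checker needs to be verified, not the phase itself.

```cpp
#include <cassert>
#include <memory>
#include <string>

struct Node {
    char op = 0;                      // 0 marks a leaf
    char leaf = 0;
    std::unique_ptr<Node> l, r;
};

// Untrusted phase: parse a fully parenthesised expression such as "((a+b)*c)".
std::unique_ptr<Node> parse(const std::string& s, std::size_t& i) {
    auto n = std::make_unique<Node>();
    if (s[i] == '(') {
        ++i;                          // consume '('
        n->l = parse(s, i);
        n->op = s[i++];               // '+' or '*'
        n->r = parse(s, i);
        ++i;                          // consume ')'
    } else {
        n->leaf = s[i++];             // single-letter operand
    }
    return n;
}

// Trusted checker: remap the phase's result back to source text.
std::string unparse(const Node* n) {
    if (n->op == 0) return std::string(1, n->leaf);
    return "(" + unparse(n->l.get()) + n->op + unparse(n->r.get()) + ")";
}

int main() {
    const std::string src = "((a+b)*c)";
    std::size_t i = 0;
    auto tree = parse(src, i);
    // Accept the phase's output only if the regenerated text matches the input.
    assert(unparse(tree.get()) == src);
    return 0;
}
```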
