Found 20 similar documents (search time: 26 ms)
1.
Sebastian Danicic, Mark Harman, John Howroyd, Lahcen Ouarbya 《The Journal of Logic and Algebraic Programming》2007,72(2):191
We introduce a new non-strict semantics for a simple while language. We demonstrate that this semantics allows us to give a denotational definition of variable dependence and neededness, which is consistent with program slicing. Unlike other semantics used in variable dependence, our semantics is substitutive. We prove that our semantics is preserved by traditional slicing algorithms.
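The notion of variable dependence that slicing relies on can be illustrated with a minimal backward-slicing sketch over a straight-line program. The representation below (statements as def/use pairs) is our own simplification for illustration, not the paper's non-strict semantics.

```python
# Minimal backward-slicing sketch: each statement is (defined_var, used_vars).
# The slice of a variable is the set of statements its value depends on.
def backward_slice(stmts, criterion):
    """Return indices of statements that the criterion variable depends on."""
    relevant = {criterion}
    slice_ids = []
    for i in range(len(stmts) - 1, -1, -1):   # walk backwards over the program
        defined, used = stmts[i]
        if defined in relevant:
            slice_ids.append(i)
            relevant.discard(defined)         # definition found; track its uses
            relevant |= used
    return sorted(slice_ids)

# x = 1; y = 2; z = x + 1; w = y + z
program = [("x", set()), ("y", set()), ("z", {"x"}), ("w", {"y", "z"})]
print(backward_slice(program, "z"))  # → [0, 2]: only x's and z's definitions
```

Slicing on `w` instead would pull in all four statements, since `w` transitively uses every defined variable.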
2.
Torben Amtoft 《Information Processing Letters》2008,106(2):45-51
Slicing is a program transformation technique with numerous applications, as it allows the user to focus on the parts of a program that are relevant for a given purpose. Ideally, the slice program should have the same termination properties as the original program, but to keep the slices manageable, it might be preferable to slice away loops that do not affect the values of relevant variables. This paper provides the first theoretical foundation to reason about non-termination insensitive slicing without assuming the presence of a unique end node. A slice is required to be closed under data dependence and under a recently proposed variant of control dependence, called weak order dependence. This allows a simulation-based correctness proof for a correctness criterion stating that the observational behavior of the original program must be a prefix of the behavior of the slice program.
3.
David Binkley, Mark Harman, Kiarash Mahdavi 《Journal of Systems and Software》2008,81(12):2287-2298
Programs express domain-level concepts in their source code. It might be expected that such concepts would have a degree of semantic cohesion. This cohesion ought to manifest itself in the dependence between statements all of which contribute to the computation of the same concept. This paper addresses a set of research questions that capture this informal observation. It presents the results of experiments on 10 programs that explore the relationship between domain-level concepts and dependence in source code. The results show that code associated with concepts has a greater degree of coherence, with tighter dependence. This finding has positive implications for the analysis of concepts as it provides an approach to decompose a program into smaller executable units, each of which captures the behaviour of the program with respect to a domain-level concept.
4.
This paper proposes a type and effect system for analyzing activation flow between components through intents in Android programs. The activation flow information is necessary for all Android analyses such as a secure information flow analysis for Android programs. We first design a formal semantics for a core of featherweight Android/Java, which can address interaction between components through intents. Based on the formal semantics, we design a type and effect system for analyzing activation flow between components and demonstrate the soundness of the system.
5.
Wes Masri 《Empirical Software Engineering》2008,13(4):369-399
Forward computing algorithms for dynamic slicing operate in tandem with program execution and generally do not require a previously stored execution trace, which makes them suitable for interactive debugging and online analysis of long-running programs. Both the time and space requirements of such algorithms are generally high, because they compute and maintain in memory the dynamic slices associated with all variables defined during execution. In this paper we empirically identify several characteristics of program dependences that we exploit to develop a memoization-based forward computing dynamic slicing algorithm whose runtime cost is better than that of any existing algorithm in its class. We also conduct an empirical comparative study contrasting the performance of our new algorithm with that of four other algorithms. One is a well-known basic algorithm; the remaining three use reduced ordered binary decision diagrams (roBDDs) to maintain dynamic slices. Our results indicate that the new memoization-based algorithm is: (1) considerably more time and space efficient than the basic algorithm and one of the roBDD-based algorithms designed to be suitable for online analysis; and (2) comparable in terms of time efficiency but consistently more space efficient than the remaining two roBDD-based algorithms.
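As a rough illustration of forward-computed dynamic slicing (a toy sketch, not Masri's actual algorithm), each executed statement updates the slice of the variable it defines, so no stored trace is needed; a memo table shares identical slice sets, loosely mimicking the space-saving role of memoization described above.

```python
# Forward dynamic slicing sketch: slices are maintained as execution proceeds.
def run(trace):
    """trace: list of (stmt_id, defined_var, used_vars) execution events."""
    slices = {}   # var -> frozenset of stmt_ids it dynamically depends on
    memo = {}     # canonicalises identical slice sets so they share memory
    for stmt_id, defined, used in trace:
        # slice of the defined variable = this statement plus the slices
        # of every variable the statement used
        s = frozenset({stmt_id}).union(*(slices.get(v, frozenset()) for v in used))
        slices[defined] = memo.setdefault(s, s)
    return slices

trace = [(1, "a", []), (2, "b", ["a"]), (3, "a", ["a", "b"]), (4, "c", ["a"])]
print(sorted(run(trace)["c"]))  # → [1, 2, 3, 4]: c depends on all four events
```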
Wes Masri is an Assistant Professor at the Computer Science Department of the American University of Beirut. His primary research interest is in program analysis and its applications to software testing, debugging and security. He received his Ph.D. in Computer Engineering from Case Western Reserve University in 2004, his M.S. in Electrical Engineering from Penn State in 1988 and B.S. in Electrical Engineering also from Case Western Reserve University in 1986. He also spent over fifteen years in the U.S. software industry primarily as a software architect and developer. Some of the industries he was involved in include: medical imaging, middleware, telecom, genomics, semiconductor, and financial. He is a member of the IEEE Computer Society and the ACM.
6.
The program dependence graph (PDG) is being used in research projects for compilation to parallel architectures, program version integration and program semantics. This paper describes the methods used in a prototype Fortran-to-PDG translator called the PDG Testbed. Implementation decisions and details of the PDG Testbed project are described as a complement to the formal papers detailing the abstract PDG. In addition, experimental results are given that show the storage consumption for a PDG relative to a conventional internal representation as well as execution times for several analysis and optimization steps.
7.
An integrated crosscutting concern migration strategy and its semi-automated application to JHotDraw
Marius Marin, Arie van Deursen, Leon Moonen, Robin van der Rijst 《Automated Software Engineering》2009,16(2):323-356
In this paper we propose a systematic strategy for migrating crosscutting concerns in existing object-oriented systems to aspect-oriented programming solutions. The proposed strategy consists of four steps: mining, exploration, documentation and refactoring of crosscutting concerns. We discuss in detail a new approach to refactoring to aspect-oriented programming that is fully integrated with our strategy, and apply the whole strategy to an object-oriented system, namely the JHotDraw framework. Moreover, we present a method to semi-automatically perform the aspect-introducing refactorings based on identified crosscutting concern sorts, which is supported by a prototype tool called sair. We perform an exploratory case study in which we apply this tool on the same object-oriented system and compare its results with the results of manual migration in order to assess the feasibility of automated aspect refactoring. Both the refactoring tool sair and the results of the manual migration are made available as open source, the latter providing the largest aspect-introducing refactoring available to date. We report on our experiences with conducting both case studies and reflect on the success and challenges of the migration process.
M. Marin is a guest at Delft University of Technology.
8.
Owen Funkhouser, Letha Hughes Etzkorn, William E. Hughes Jr. 《Software Quality Journal》2008,16(1):131-156
This research involves a methodology and associated proof-of-concept tool to partially automate software validation by comparing UML use cases with particular execution scenarios in source code. These execution scenarios are represented as the internal documentation (identifier names and comments) associated with sequences of execution in static call graphs. This methodology has the potential to reduce validation time and associated costs in many organizations, by enabling quick and easy validation of software relative to the use cases that describe the requirements. The proof-of-concept tool as it currently stands is intended as an aid to an IV&V software engineer, to assist in directing the software validation process. The approach is lightweight and easily implemented.
9.
Theories, tools and research methods in program comprehension: past, present and future (cited by 1)
Margaret-Anne Storey 《Software Quality Journal》2006,14(3):187-208
Program comprehension research can be characterized by both the theories that provide rich explanations about how programmers understand software and the tools that are used to assist in comprehension tasks. In this paper, I review some of the key cognitive theories of program comprehension that have emerged over the past thirty years. Using these theories as a canvas, I then explore how tools that are commonly used today have evolved to support program comprehension. Specifically, I discuss how the theories and tools are related and reflect on the research methods that were used to construct the theories and evaluate the tools. The reviewed theories and tools are distinguished according to human characteristics, program characteristics, and the context for the various comprehension tasks. Finally, I predict how these characteristics will change in the future and speculate on how a number of important research directions could lead to improvements in program comprehension tool development and research methods.
Dr. Margaret-Anne Storey is an associate professor of computer science at the University of Victoria, a Visiting Scientist at the IBM Centre for Advanced Studies in Toronto and a Canada Research Chair in Human Computer Interaction for Software Engineering. Her research passion is to understand how technology can help people explore, understand and share complex information and knowledge. She applies and evaluates techniques from knowledge engineering and visual interface design to applications such as reverse engineering of legacy software, medical ontology development, digital image management and learning in web-based environments. She is also an educator and enjoys the challenges of teaching programming to novice programmers.
10.
Karel Veselý 《International Journal of General Systems》2013,42(2):197-216
Real systems can include two types of state variables – dynamic and static. While dynamic state variables are a common part of each system, static variables are not, and their presence in a system may cause some problems if standard system theories are used. In this paper, it is shown that, using a new system theory (NST), it is possible to work correctly with systems and subsystems which include not only dynamic state variables but also static state variables. If standard system theories are used, static variables in the real system cause not only problems in describing systems but also some challenges in control theory. These challenges involve, for example, questions of controllability, reachability, or observability of a plant that includes static variables, or the optimal control design of a plant that includes static state variables. Some of the challenges mentioned are addressed in this paper after a brief introduction of the NST.
11.
12.
Data dependence and its application to parallel processing (cited by 8)
Data dependence testing is required to detect parallelism in programs. In this paper data dependence concepts and data dependence direction vectors are reviewed. Data dependence computation in parallel and vector constructs as well as serial do loops is covered. Several transformations that require data dependence are given as examples, such as vectorization (translating serial code into vector code), concurrentization (translating serial code into concurrent code for multiprocessors), scalarization (translating vector or concurrent code into serial code for a scalar uniprocessor), loop interchanging and loop fusion. The details of data dependence testing, including several data dependence decision algorithms, are given.
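One of the classic dependence decision procedures such surveys cover is the GCD test: for subscripted accesses `a[p*i + q]` (write) and `a[r*i + s]` (read) in the same loop, a dependence is possible only if gcd(p, r) divides s − q. The sketch below is a textbook formulation, not necessarily the exact algorithm in this paper.

```python
from math import gcd

def gcd_test(p, q, r, s):
    """True if a dependence between a[p*i+q] and a[r*i+s] is possible."""
    g = gcd(p, r)
    if g == 0:                     # both coefficients zero: same element iff q == s
        return q == s
    return (s - q) % g == 0        # gcd must divide the constant difference

print(gcd_test(2, 0, 2, 1))  # a[2i] vs a[2i+1]: gcd 2 does not divide 1 → False
print(gcd_test(1, 0, 1, 3))  # a[i] vs a[i+3]: dependence possible → True
```

A `False` result proves independence, so the loop can be vectorized; `True` only means a dependence cannot be ruled out, which is why stronger tests exist.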
13.
14.
As UML 2.0 is evolving into a family of languages with individually specified semantics, there is an increasing need for automated and provenly correct model transformations that (i) assure the integration of local views (different diagrams) of the system into a consistent global view, and, (ii) provide a well-founded mapping from UML models to different semantic domains (Petri nets, Kripke automaton, process algebras, etc.) for formal analysis purposes as foreseen, for instance, in submissions for the OMG RFP for Schedulability, Performance and Time. However, such transformations into different semantic domains typically require the deep understanding of the underlying mathematics, which hinders the use of formal specification techniques in industrial applications. In the paper, we propose a multilevel metamodeling technique with precise static and dynamic semantics (based on a refinement calculus and graph transformation) where the structure and operational semantics of mathematical models can be defined in a UML notation without cumbersome mathematical formulae.
15.
16.
Database applications are becoming increasingly popular, mainly due to the advanced data management facilities that the underlying database management system offers compared against traditional legacy software applications. The interaction, however, of such applications with the database system introduces a number of issues, among which, this paper addresses the impact analysis of the changes performed at the database schema level. Our motivation is to provide the software engineers of database applications with automated methods that facilitate major maintenance tasks, such as source code corrections and regression testing, which should be triggered by the occurrence of such changes. The presented impact analysis is thus two-folded: the impact is analysed in terms of both the affected source code statements and the affected test suites concerning the testing of these applications. To achieve the former objective, a program slicing technique is employed, which is based on an extended version of the program dependency graph. The latter objective requires the analysis of test suites generated for database applications, which is accomplished by employing testing techniques tailored for this type of applications. Utilising both the slicing and the testing techniques enhances program comprehension of database applications, while also supporting the development of a number of practical metrics regarding their maintainability against schema changes. To evaluate the feasibility and effectiveness of the presented techniques and metrics, a software tool, called DATA, has been implemented. The experimental results from its usage on the TPC-C case study are reported and analysed.
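The statement-level half of such an impact analysis can be sketched as reachability over a dependence relation: statements that reference a changed column are directly affected, and the impact propagates to statements that depend on them. All names below are hypothetical; the DATA tool described in the paper works on a full extended program dependency graph.

```python
# Illustrative schema-change impact analysis over a toy dependence relation.
def impact(uses, deps, changed_column):
    """uses: stmt -> set of schema columns it references;
    deps: stmt -> set of statements data-dependent on it."""
    affected = {s for s, cols in uses.items() if changed_column in cols}
    worklist = list(affected)
    while worklist:                       # propagate along data dependences
        s = worklist.pop()
        for t in deps.get(s, ()):
            if t not in affected:
                affected.add(t)
                worklist.append(t)
    return affected

uses = {"s1": {"price"}, "s2": set(), "s3": {"qty"}}
deps = {"s1": {"s2"}}                     # s2 uses a value computed by s1
print(sorted(impact(uses, deps, "price")))  # → ['s1', 's2']
```

The same closure, applied to the mapping from statements to the test cases that execute them, would yield the affected test suites.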
17.
We present a method for profiling programs that are written using domain-specific languages. Instead of reporting execution in terms of implementation details as in most existing profilers, our method operates at the level of the problem domain. Program execution generates a stream of events that summarises the execution in terms of domain concepts and operations. The events enable us to construct a hierarchical model of the execution. A flexible reporting system summarises the execution along developer-chosen model dimensions. The result is a flexible way for a developer to explore the execution of their program without requiring any knowledge of the domain-specific language implementation. These ideas are embodied in a new profiling library called dsprofile that is independent of the problem domain so it has no specific knowledge of the data and operations that are being profiled. We illustrate the utility of dsprofile by using it to profile programs that are written using our Kiama language processing library. Specifically, we instrument Kiama's attribute grammar and term rewriting domain-specific languages to use dsprofile to generate events that report on attribute evaluation and rewrite rule application. Examples of typical language processing tasks show how domain-specific profiling can help to diagnose problems in Kiama-based programs without the developer needing to know anything about how Kiama is implemented.
18.
In a graph theory model, clustering is the process of division of vertices into groups, with a higher density of edges within groups than between them. In this paper, we introduce a new clustering method for detecting such groups and use it to analyse some classic social networks. The new method has two distinguishing features: a non-binary hierarchical tree and overlapping clustering. A non-binary hierarchical tree is much smaller than the binary trees constructed by most traditional methods and, therefore, it clearly highlights meaningful clusters, which significantly reduces further manual effort for cluster selection. The present method is tested on several benchmark data sets for which the community structure was known beforehand, and the results indicate that it is a sensitive and accurate method for extracting community structure from social networks.
19.
This paper describes our experience in capturing, using a formal specification language, a model of the knowledge-intensive domain of oceanic air traffic control. This model is intended to form part of the requirements specification for a decision support system for air traffic controllers. We give an overview of the methods we used in analysing the scope of the domain, choosing an appropriate formalism, developing a domain model, and validating the model in various ways. Central to the method was the development of a formal requirements engineering environment which provided automated tools for model validation and maintenance.
20.
Steven D. Johnson Yanhong A. Liu Yuchen Zhang 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):211-223
A systematic transformation method based on incrementalization and value caching generalizes a broad family of program optimizations. It yields significant performance improvements in many program classes, including iterative schemes that characterize hardware specifications. CACHET is an interactive incrementalization tool. Although incrementalization is highly structured and automatable, better results are obtained through interaction, where the main task is to guide term rewriting based on data-specific identities. Incrementalization specialized to iteration corresponds to strength reduction, a familiar program improvement technique. This correspondence is illustrated by the derivation of a hardware-efficient nonrestoring square-root algorithm, which has also served as an example of theorem-prover-based implementation verification.
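The correspondence between incrementalization and strength reduction can be seen in a standard toy example (ours, not the paper's square-root derivation): the squares i² in a loop can be maintained incrementally via the identity (i+1)² = i² + 2i + 1, replacing a multiplication with additions.

```python
# Strength reduction as incrementalization: cache the previous square and
# update it incrementally instead of recomputing i * i on every iteration.
def squares_naive(n):
    return [i * i for i in range(n)]

def squares_reduced(n):
    out, sq = [], 0
    for i in range(n):
        out.append(sq)
        sq += 2 * i + 1        # (i+1)^2 = i^2 + 2i + 1
    return out

assert squares_naive(10) == squares_reduced(10)
```

In a hardware specification the same rewrite removes a multiplier from the datapath, which is what makes incrementalized iterative schemes hardware-efficient.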
Published online: 9 October 2001
* S.D. Johnson supported, in part, by the National Science Foundation under grant MIP-9601358.
** Y.A. Liu supported in part by the National Science Foundation under grant CCR-9711253, the Office of Naval Research under grant N00014-99-1-0132, and Motorola Inc. under a Motorola University Partnership in Research Grant.
*** Y. Zhang is a student recipient of a Motorola University Partnership in Research Grant.