Similar Documents
20 similar documents found.
1.
A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow by tracing method execution events, or they analyze object graphs of heap memory snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to the exchange of objects as the object flow, and we claim that it is necessary to analyze object flows if we are to understand the runtime behavior of an object-oriented application. We propose and detail object flow analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach on three case studies.
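A minimal sketch of the object-flow idea in Python (the recorder and its event format are hypothetical, not the authors' tool): instrumented call boundaries report every object handed from one class to another, yielding the class-to-class exchange data such a visual view is built on.

from collections import defaultdict

class ObjectFlowRecorder:
    def __init__(self):
        # (sender class, receiver class) -> ids of objects passed between them
        self.flows = defaultdict(set)

    def record_transfer(self, sender, receiver, obj):
        # invoked by instrumentation on each argument pass and return value
        self.flows[(type(sender).__name__, type(receiver).__name__)].add(id(obj))

    def summary(self):
        # distinct objects exchanged per class pair
        return {pair: len(objs) for pair, objs in self.flows.items()}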

2.
ChARM is a simulation tool for tuning ARM-based embedded systems that include cache memories. ChARM provides parametric, trace-driven simulation for tuning a system configuration. A designer can observe performance while varying the timing, the architectural features, and the management policies of the system components. Designers can thus evaluate a program's execution time, the time spent in memory accesses, the overall, code, and data miss ratios, and the number of burst-read operations. They can also evaluate the number of write operations for write-through cache models and of burst-write operations for copy-back cache models. Finally, ChARM's program locality analysis illustrates the sequentiality, temporality, and loops of a program in easy-to-read three-dimensional graphs. These graphs, together with graphs showing the distribution of replacement conflicts in the cache, help designers understand how a program works and how it stresses the memory hierarchy.
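A toy trace-driven simulation in the same spirit, assuming a direct-mapped cache and a trace of byte addresses (ChARM itself models ARM-specific cache organizations, timing, and management policies):

def simulate_direct_mapped(trace, num_lines=256, line_size=16):
    tags = [None] * num_lines            # one stored tag per cache line
    misses = 0
    for addr in trace:
        block = addr // line_size        # memory block containing the address
        index = block % num_lines        # cache line the block maps to
        tag = block // num_lines
        if tags[index] != tag:           # miss: fetch the block into the line
            misses += 1
            tags[index] = tag
    return misses / len(trace)           # miss ratio

print(simulate_direct_mapped([0, 16, 0, 4096, 0]))  # 0.8: address 0 conflicts with 4096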

3.
Many novel computer architectures, such as array processors and multiprocessors, that achieve high performance through the use of concurrency exploit variations of the von Neumann model of computation. Effective utilization of these machines places special demands on programmers and their programming languages, such as structuring data into vectors or partitioning programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail since no new problems were encountered. The language DFL embodies the concepts of functional programming, but in appearance it closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
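The data flow firing rule is easy to sketch; this minimal evaluator (assumed graph encoding) executes an operator as soon as all of its operands are available, which is exactly what licenses parallel execution of data-independent operations:

import operator

def run_dataflow(nodes, inputs):
    """nodes: name -> (op, [operand names]); inputs: name -> value."""
    values = dict(inputs)
    pending = dict(nodes)
    while pending:
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]    # the firing rule
        if not ready:
            raise ValueError("cyclic or underspecified graph")
        for n in ready:                # these nodes could fire in parallel
            op, deps = pending.pop(n)
            values[n] = op(*(values[d] for d in deps))
    return values

# "s" and "p" are data-independent and may fire concurrently:
print(run_dataflow({"s": (operator.add, ["a", "b"]),
                    "p": (operator.mul, ["a", "b"]),
                    "r": (operator.sub, ["s", "p"])},
                   {"a": 3, "b": 4})["r"])             # 7 - 12 = -5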

4.
Programming faults are defined in the framework of the program verification schema (proof outline). Component S in program P is faulty if P cannot be proved correct with the current implementation of S but can be proved using the implementation specification for S. A programming error is a state that violates that specification. Conditions for error propagation and masking are expressed in terms of the relationships between the implementation and design specification of S, which defines the role of S in the overall design of P. Errors propagate due to the dependencies between program entities. It is shown that "classical" static dependencies, developed for the purpose of code optimization, are inadequate for the analysis of error propagation since they do not capture events that occur on individual paths through the program. A novel path analysis method is proposed to identify variables potentially corrupted on the same path due to the existence of the fault. The method is based upon error propagation axioms. The axioms are used to define path relations for structured programming constructs. The relations provide a conservative structural approximation to the semantical theory of error creation and propagation and are shown to be useful in testing, debugging and static analysis.
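A hedged sketch of the path-based view (the statement encoding is an assumption; the paper's axioms are considerably richer): corruption propagates along a path through definitions that use corrupted values, and is masked when a corrupted variable is redefined from clean ones.

def corrupted_on_path(path, initially_corrupt):
    """path: (defined_var, used_vars) pairs in execution order."""
    corrupt = set(initially_corrupt)
    for defined, used in path:
        if corrupt & set(used):
            corrupt.add(defined)       # the error propagates into `defined`
        elif defined in corrupt:
            corrupt.discard(defined)   # a clean redefinition masks the error
    return corrupt

# x is corrupted; y = f(x) propagates it; x = 5 then masks the original error:
print(corrupted_on_path([("y", ["x"]), ("x", [])], {"x"}))  # {'y'}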

5.
To develop new compilation and optimization techniques, computer scientists frequently consult program analysis artifacts such as control flow graphs (CFGs) and traces of executed instructions. A CFG is a directed graph representing possible execution paths in a program. CFGs are commonly visualized as node-link diagrams, while traces are commonly viewed in raw text format. Visualizing and exploring CFGs and traces is challenging because of the complexity and specificity of the operations researchers perform. We present a design study in which we collaborate with computer scientists researching dynamic binary analysis and compilation techniques. The research group primarily employs CFGs and traces to reason about and develop new algorithms for program optimization and parallelization. Through questionnaires, interviews, and a year-long observation, we analyzed their use of visualization, noting that the tasks they perform match common subroutines they employ in their techniques. Based on this task analysis, we designed CFGExplorer, a visual analytics system that supports computer scientists with interactions that are integrated with the program structure. We developed a domain-specific graph modification to generate graph layouts that reflect program structure. CFGExplorer incorporates structures such as functions and loops, and uses the correspondence between CFGs and traces to support navigation. We further augment the system to highlight the output of program analysis techniques, facilitating exploration at a higher level. We evaluate the tool through guided sessions, semi-structured interviews, and deployment. Our collaborators have integrated CFGExplorer into their workflow and use it to reason about programs, develop and debug new algorithms, and share their findings.

6.
Pipelining is now a standard technique for increasing the speed of computers, particularly for floating-point arithmetic. Single-chip, pipelined floating-point functional units are available as "off-the-shelf" components. Addressing arithmetic can be done concurrently with floating-point operations to construct a fast processor that can exploit fine-grain parallelism. This paper describes a metric to estimate the optimal execution time of DO loops on particular processors. The metric is parameterized by the memory bandwidth and peak floating-point rate of the processor, as well as by the length of the pipelines used in the functional units. Data dependence analysis provides information about the execution-order constraints of the operations in the DO loop and is used to estimate the amount of pipeline interlock required by a loop. Several transformations are investigated to determine their impact on loops under this metric.
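One plausible shape for such a metric, shown purely as an illustration (this is not the paper's formula): a loop's execution time is bounded below by its compute or memory demand, whichever dominates, plus pipeline fill and dependence-interlock stalls.

def estimated_loop_time(iterations, flops_per_iter, bytes_per_iter,
                        peak_flops, mem_bandwidth,
                        pipe_depth, interlocks_per_iter, cycle_time):
    compute = iterations * flops_per_iter / peak_flops    # FP-throughput bound
    memory = iterations * bytes_per_iter / mem_bandwidth  # bandwidth bound
    fill = pipe_depth * cycle_time                        # one-time pipeline fill
    stalls = iterations * interlocks_per_iter * pipe_depth * cycle_time
    return max(compute, memory) + fill + stalls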

7.
In many applications, the properties of an object being modeled are stored as labels on the vertices or edges of a graph. In this paper, we consider the succinct representation of labeled graphs. Our main results are succinct representations of labeled and multi-labeled graphs (we consider planar triangulations, planar graphs and k-page graphs) that support various label queries efficiently. The additional space cost to store the labels is essentially the information-theoretic minimum. As far as we know, our representations are the first succinct representations of labeled graphs. We also present two preliminary results that support the main contribution. First, we design a succinct representation of unlabeled planar triangulations that supports the rank/select of edges in counterclockwise (ccw) order in addition to the other operations supported in previous work. Second, we design a succinct representation for a k-page graph when k is large, supporting various navigational operations more efficiently. In particular, we can test the adjacency of two vertices in O(lg k) time, whereas previous work uses O(k) time.
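Rank and select are the primitive queries beneath such succinct structures; this deliberately non-succinct version shows only their semantics (a real succinct representation answers both in O(1) time using o(n) extra bits).

class BitVector:
    def __init__(self, bits):
        self.bits = bits
        self.prefix = [0]              # prefix[i] = number of 1s in bits[0..i)
        for b in bits:
            self.prefix.append(self.prefix[-1] + b)

    def rank1(self, i):                # 1s strictly before position i
        return self.prefix[i]

    def select1(self, k):              # position of the k-th 1 (1-based)
        for i, b in enumerate(self.bits):
            if b and self.prefix[i + 1] == k:
                return i
        raise ValueError("fewer than k ones")

bv = BitVector([1, 0, 1, 1, 0])
assert bv.rank1(3) == 2 and bv.select1(3) == 3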

8.
Software birthmarks utilize specific program characteristics to validate the origin of software, so they can be applied to detect software piracy. One state-of-the-art software birthmark technique adopts dynamic system call dependence graphs as the unique signature of a program; this signature cannot be cluttered by existing obfuscation techniques and is also immune to the no-op system call insertion attack. In this paper, we analyze its weaknesses and construct replacement attacks that use semantically equivalent system calls to break the high-frequency dependencies between system calls in the victim's original system call dependence graph. Our results show that the proposed replacement attacks can successfully destroy the original birthmark.
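The core move of such a replacement attack can be illustrated with a hypothetical equivalence table (the listed pairs are assumptions about interchangeable Linux calls, not the paper's table): rewriting calls to semantically equivalent ones removes the expected high-frequency edges from the dependence graph without changing program behavior.

EQUIVALENTS = {                 # assumed semantically equivalent alternatives
    "open": "openat",
    "read": "pread64",
    "fork": "clone",
}

def rewrite_trace(syscalls):
    # each substitution breaks a (caller, callee) pair the birthmark expects
    return [EQUIVALENTS.get(call, call) for call in syscalls]

print(rewrite_trace(["open", "read", "close"]))  # ['openat', 'pread64', 'close']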

9.
A compositional method of constructing data dependency graphs for Ada programs is presented. These graphs are useful in a program development environment for analyzing data dependencies and tracking information flow within a program. Graphs for primitive program statements are combined to form graphs for larger program units. Composition rules are described for iteration, recursion, exception handling, and tasking, as well as for simpler Ada constructs. The correctness of the construction and the practicality of the technique are discussed.
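One composition rule is easy to sketch (the graph encoding is assumed, and kills along the path are ignored for brevity): sequencing two fragments adds a dependence edge wherever the second fragment uses a variable the first defines.

def seq_compose(a, b):
    """a, b: dicts with 'stmts' (id -> (defs, uses)) and 'deps' (edge set)."""
    stmts = {**a["stmts"], **b["stmts"]}
    deps = set(a["deps"]) | set(b["deps"])
    for s1, (defs1, _) in a["stmts"].items():
        for s2, (_, uses2) in b["stmts"].items():
            if defs1 & uses2:          # a definition in `a` reaches a use in `b`
                deps.add((s1, s2))
    return {"stmts": stmts, "deps": deps}

a = {"stmts": {1: ({"x"}, set())}, "deps": set()}   # S1: x := input
b = {"stmts": {2: ({"y"}, {"x"})}, "deps": set()}   # S2: y := x + 1
print(seq_compose(a, b)["deps"])                    # {(1, 2)}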

10.
A dynamic program slice is an executable part of a program whose behaviour is identical, for the same program input, to that of the original program with respect to one or more variables of interest at some execution position. Existing algorithms compute dynamic slices using data and control dependencies. These algorithms are limited to structured programs: they may compute incorrect dynamic slices for unstructured programs because of the limitations of the control dependencies used. In this paper, we present a novel approach to dynamic slice computation for unstructured programs. The approach employs the notion of a removable block in finding dynamic program slices. Dynamic slices are derived by identifying not only those parts of the program execution that contribute to the computation of the value of a variable of interest, but also those parts that do not contribute to it. Data dependencies are used to identify contributing computations, whereas removable blocks are used to identify noncontributing computations. We have proved that the presented dynamic slicing algorithms compute dynamic slices correctly. In addition, these algorithms may compute more accurate dynamic slices than existing algorithms that use control dependencies. The presented algorithms have been implemented in a tool that supports dynamic slicing for Pascal programs.
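A much-simplified illustration of the data-dependence half (the removable-block treatment of unstructured control flow, which is the paper's contribution, is omitted): walk the recorded execution backward from the slicing criterion, keeping only statements whose definitions are demanded.

def dynamic_slice(trace, criterion_var):
    """trace: (stmt_id, defined_var, used_vars) triples in execution order."""
    needed = {criterion_var}
    slice_stmts = set()
    for stmt, defined, used in reversed(trace):
        if defined in needed:
            slice_stmts.add(stmt)
            needed.discard(defined)    # this definition satisfies the demand
            needed.update(used)        # ...and creates demand for its operands
    return slice_stmts

trace = [(1, "a", []), (2, "b", ["a"]), (3, "c", []), (4, "b", ["b"])]
print(dynamic_slice(trace, "b"))       # {1, 2, 4}: statement 3 is irrelevant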

11.
This paper examines nonloop parallelism at both fine and coarse levels of granularity in ordinary Fortran programs. Dynamic self-scheduling algorithms are developed for acyclic task graphs containing both data and control dependences, along with the compiler optimizations necessary to make them practical. It is shown that practical algorithms based on atomic fetch-and-Φ operations on a single shared variable, similar in spirit to dynamic loop dispatching algorithms, are possible for acyclic task graphs. Further, they generalize easily to loops. A key requirement is the use of compiler algorithms to optimize the task graphs. We show that although exact redundant-dependence removal is theoretically NP-hard, its practical complexity on actual codes is small. Performance-related measurements are given to characterize the algorithms on a set of standard benchmark codes.
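A sketch of the dispatching idea under stated assumptions (Python threads, with a lock standing in for the atomic fetch-and-Φ primitive): each task carries a count of unfinished predecessors, and completing a task releases any successor whose count reaches zero.

import threading
from queue import Queue

def self_schedule(succs, pred_count, run, n_workers=4):
    """succs: task -> successor list; pred_count: task -> #predecessors."""
    ready, lock, done = Queue(), threading.Lock(), threading.Semaphore(0)
    for t, c in pred_count.items():
        if c == 0:
            ready.put(t)               # roots of the task graph start ready

    def worker():
        while True:
            t = ready.get()
            if t is None:
                return
            run(t)
            with lock:                 # stands in for the atomic update
                for s in succs.get(t, []):
                    pred_count[s] -= 1
                    if pred_count[s] == 0:
                        ready.put(s)   # last predecessor done: dispatch
            done.release()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads: th.start()
    for _ in range(len(pred_count)): done.acquire()   # wait for every task
    for _ in threads: ready.put(None)                 # shut workers down
    for th in threads: th.join()

# diamond graph: a before b and c, both before d
self_schedule({"a": ["b", "c"], "b": ["d"], "c": ["d"]},
              {"a": 0, "b": 1, "c": 1, "d": 2}, run=print)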

12.
Deadlock is one of the common defects in concurrent programs. Dynamic deadlock analysis detects deadlocks by building models such as lock graphs and segmentation graphs from program execution traces. However, the lock graph and its existing variants cannot distinguish multiple executions of the same lock-acquisition statement within a loop; the lock sets recorded in extended lock graphs cannot capture locks that a thread once held and later released; and segmentation graphs cannot characterize the inter-segment dependencies caused by the coupling of lock acquisition and release operations with thread start operations. These problems lead to several kinds of false deadlock reports. To address them, this paper improves the existing lock graph and segmentation graph models: the lock graph is augmented with the execution order of statements, the segmentation graph is augmented with lock acquisition and release information, and segments are partitioned at a finer granularity to model the inter-segment dependencies induced by lock objects. On the basis of the resulting lock-augmented segmentation graph and order-augmented lock graph, a new deadlock detection method is proposed. The method effectively eliminates the aforementioned false positives and thus improves the accuracy of deadlock detection. A corresponding prototype system is developed, and the effectiveness of the method is evaluated and validated on several example programs.
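The baseline lock-graph check that these augmentations refine can be sketched directly: add an edge l1 → l2 whenever a thread acquires l2 while holding l1, and report a potential deadlock when the graph contains a cycle. (The false positives discussed above arise precisely because this basic graph ignores execution order and segment dependencies.)

from collections import defaultdict

def potential_deadlock(events):
    """events: (thread, 'acquire'|'release', lock) triples in observed order."""
    held = defaultdict(list)          # locks currently held, per thread
    edges = defaultdict(set)          # the lock-order graph
    for thread, action, lock in events:
        if action == "acquire":
            for h in held[thread]:
                edges[h].add(lock)    # lock acquired while holding h
            held[thread].append(lock)
        else:
            held[thread].remove(lock)

    def reaches(src, dst, seen=()):
        return dst in edges[src] or any(
            n not in seen and reaches(n, dst, seen + (src,))
            for n in edges[src])

    return any(reaches(l, l) for l in list(edges))    # any cycle?

# t1 takes A then B; t2 takes B then A -> cycle A -> B -> A
events = [("t1", "acquire", "A"), ("t1", "acquire", "B"),
          ("t1", "release", "B"), ("t1", "release", "A"),
          ("t2", "acquire", "B"), ("t2", "acquire", "A")]
print(potential_deadlock(events))    # True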

13.
We propose a novel dynamic program slicing technique for concurrent object-oriented programs. Our technique uses a Concurrent System Dependence Graph (CSDG) as the intermediate program representation. We mark and unmark the edges of the CSDG as the corresponding dependencies arise and cease at run-time: an edge is marked while its associated dependence exists and unmarked when the dependence ceases to exist. Our approach eliminates the use of trace files. Another advantage of our approach is that when a request for a slice is made, the slice is already available, which appreciably reduces the response time of slicing commands.
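A minimal sketch of the mark/unmark idea (the representation is assumed): dependence edges exist statically in the CSDG, execution merely toggles them, and a slice becomes a reachability query over the currently marked edges, answerable on demand with no trace file.

class MarkedDepGraph:
    def __init__(self, edges):
        self.edges = set(edges)    # all statically possible dependence edges
        self.marked = set()

    def mark(self, edge):          # the dependence became active at run-time
        self.marked.add(edge)

    def unmark(self, edge):        # the dependence ceased to exist
        self.marked.discard(edge)

    def slice_from(self, node):    # already available when requested
        out, work = set(), [node]
        while work:
            n = work.pop()
            for src, dst in self.marked:
                if dst == n and src not in out:
                    out.add(src)
                    work.append(src)
        return out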

14.
Many software engineering applications utilize static program analyses to gain information about programs. Some applications perform static analysis over the whole program's call graph, while others are more interested in specific call chains within a program's call graph. A particular static call chain for an object-oriented program may in fact be impossible to execute, or infeasible, such that there is no input for which the chain will be taken. Identifying infeasible static call chains can save time and resources with respect to the targeted software development tool. This paper examines the type infeasibility of call chains, which may be caused by inherently polymorphic call sites or by imprecision in call graphs. The problem of determining whether a call chain is type infeasible is defined and exemplified, and a key property characterizing type infeasible call chains is described. An empirical study was performed on a set of Java programs, and results from examining the call graphs of these programs are presented. Finally, an algorithm that automatically determines the type infeasibility of a call chain due to object parameters is presented.
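A hedged sketch of one such check (the chain and dispatch encodings are assumptions, not the paper's algorithm): a chain edge is type feasible only if some receiver type possible at the call site actually dispatches to the implementation the chain names.

def chain_type_feasible(chain, receiver_types, dispatch):
    """chain: [(caller, (method, impl_class))]; dispatch: (type, method) -> class."""
    for caller, (method, impl_class) in chain:
        possible = receiver_types[(caller, method)]  # types seen at this site
        if not any(dispatch.get((t, method)) == impl_class for t in possible):
            return False       # no runtime type selects this chain edge
    return True

dispatch = {("Square", "area"): "Square", ("Circle", "area"): "Circle"}
recv = {("main", "area"): {"Square"}}       # analysis: only Square flows here
print(chain_type_feasible([("main", ("area", "Circle"))], recv, dispatch))  # False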

15.
Existing dynamic self-scheduling algorithms, used to schedule independent tasks on heterogeneous clusters, cannot handle tasks with dependencies because they lack support for internode communication. To compensate for this deficiency, we introduce a synchronization mechanism that provides inter-processor communication, thus enabling self-scheduling algorithms to efficiently handle nested loops with dependencies. We also present a weighting mechanism that significantly improves the performance of dynamic self-scheduling algorithms. These algorithms divide the total number of tasks into chunks and assign them to processors. The weighting mechanism adapts the chunk sizes to the computing power and current run-queue state of the processors. The synchronization and weighting mechanisms are orthogonal, in the sense that they can be applied simultaneously to loops with dependencies. Thus, they broaden the application spectrum of dynamic self-scheduling algorithms and improve their performance. Extensive testing confirms the efficiency of both mechanisms and the significant improvement of the synchronized-weighted versions of the algorithms over the synchronized-only versions.
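One plausible form of the weighting mechanism, stated as an assumption rather than the paper's formula: scale each node's chunk by its relative computing power, discounted by its current run-queue length.

def weighted_chunks(base_chunk, power, load):
    """power, load: per-node relative speed and current run-queue length."""
    weights = [p / (1 + q) for p, q in zip(power, load)]
    mean = sum(weights) / len(weights)
    # a node of average effective speed receives exactly base_chunk tasks
    return [max(1, round(base_chunk * w / mean)) for w in weights]

# the fast, idle node gets the largest chunk:
print(weighted_chunks(50, power=[2.0, 1.0, 1.0], load=[0, 1, 0]))  # [86, 21, 43]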

16.
As partner relationships become more dynamic and global boundaries give way to a more agile and dynamic environment, the ability to distribute one's processes in an agile manner becomes increasingly important. Such processes may need to be split not only along their explicit dependencies but also along more complex behavior such as recovery behavior and loops. The resulting process fragments can be distributed and wired together, recreating the execution semantics of the original process model. In earlier work, we presented BPEL fragmentation covering data and explicit control dependencies. We now extend the approach to handle fragmenting loops and scopes. Maintaining the focus on standards and on maximizing the extensibility of Web service runtimes, the solution defines and uses two new coordination protocols that plug into the WS-Coordination framework. The approach uses the standards as much as is feasible and addresses the remaining required functionality through architected extensions. This results in a layered approach that maximizes transparency and interoperability. After defining the fragmentation approach for scopes and loops, an implementation is presented that extends the Active Endpoints BPEL engine and a WS-Coordination system. A detailed example illustrates how the protocols are used at runtime to enable the coordinator and the process fragments to recreate the behavior of the original, unsplit process model.

17.
Target Set Selection, a prominent NP-hard problem in social network analysis and distributed computing, is notoriously hard both in terms of achieving useful polynomial-time approximations and in terms of fixed-parameter algorithms. Given an undirected graph, the task is to select a minimum number of vertices into a "target set" such that all other vertices become active in the course of a dynamic process (which may go through several activation rounds). A vertex, equipped with a threshold value t, becomes active once at least t of its neighbors are active; initially, only the target set vertices are active. We contribute further insights into the existence of islands of tractability for Target Set Selection by spotting new parameterizations characterizing some sparse graphs as well as some "cliquish" graphs, and by developing corresponding fixed-parameter tractability and (parameterized) hardness results. In particular, we demonstrate that upper-bounding the thresholds by a constant may significantly ease the search for efficiently solvable, but still meaningful, special cases of Target Set Selection.
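The activation process itself is straightforward to state in code; the NP-hard part is choosing a minimum target set. This sketch (assumed adjacency-list encoding) simulates activation rounds until a fixed point is reached.

def activates_everyone(neighbors, threshold, target_set):
    active = set(target_set)
    changed = True
    while changed:                       # one pass per activation round
        changed = False
        for v in neighbors:
            if v not in active and \
               sum(u in active for u in neighbors[v]) >= threshold[v]:
                active.add(v)
                changed = True
    return len(active) == len(neighbors)

# path a - b - c with all thresholds 1: the target set {a} activates everyone
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(activates_everyone(g, {"a": 1, "b": 1, "c": 1}, {"a"}))  # True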

18.
The author proposes a technique called control and definition modularization (CDM), which derives a systematic program layout from a given structure chart using the concepts of 'control' and 'definition' modules. A control module includes processes for handling a conceptual data object that is not directly implementable. A definition module defines the operations associated with a concrete data object implementable using a primitive or derived data type of a programming language. Grouping the operations available for each concrete data object, and keeping them separate from the execution flow, improves program maintainability. This technique extends the structured design methodology and provides designers with a systematic way of deriving informational-strength modules as well as a structured physical layout from the structure chart. A program based on the CDM technique is easier to understand and maintain. This research makes a significant contribution toward bridging the gap between structured design and object-oriented concepts.
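A rough Python illustration of the split (the example classes are hypothetical): a definition module bundles the operations on one concrete data object, while a control module contains only execution flow.

class SymbolTable:                 # definition module: one concrete data object
    def __init__(self):
        self._entries = {}

    def insert(self, name, info):
        self._entries[name] = info

    def lookup(self, name):
        return self._entries.get(name)

def process_declarations(decls, table):   # control module: sequencing only
    for name, info in decls:
        if table.lookup(name) is None:     # operations live with the data;
            table.insert(name, info)       # the execution flow lives here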

19.
This paper addresses the following supervisory problem: a continuous plant (P) is to be supervised via symbolic (or quantised) actions. These symbolic actions suggest the set points for the lower-level control loops. The system dynamics are analysed at the supervisory level (K) using a qualitative approach. The relationships between the variables and the steady-state references are known. These problems are especially common in chemical process control. The supervisor handles start-up and shut-down procedures and takes appropriate action to solve the sequential or parallel tasks of a basic procedure. The object of this paper is to introduce an approach to the problem of deriving a set of rules from a physical process. The solutions for supervising start-up and shut-down operations in closed loop are suitable for large industrial systems, as are the batch and semi-continuous processes used to maintain operations in a dynamic mode. This paper considers the qualitative, event-based expert supervision approach applied to distillation column problems. The development of the general supervision scheme in this work is based on an events generator and a corrective-actions generator. The qualitative symbols are based on fuzzy sets. In particular, mechanisms are provided for processing changes in the system variables into qualitative symbols.

20.
Slicing is a program transformation technique with numerous applications, as it allows the user to focus on the parts of a program that are relevant for a given purpose. Ideally, the sliced program should have the same termination properties as the original program, but to keep slices manageable, it may be preferable to slice away loops that do not affect the values of the relevant variables. This paper provides the first theoretical foundation for reasoning about non-termination-insensitive slicing without assuming the presence of a unique end node. A slice is required to be closed under data dependence and under a recently proposed variant of control dependence called weak order dependence. This allows a simulation-based correctness proof for a correctness criterion stating that the observational behavior of the original program must be a prefix of the behavior of the slice program.
