Similar Documents
20 similar documents found.
1.
Distributed database performance is often unpredictable due to issues such as system complexity, network congestion, or imbalanced data distribution. These issues are difficult for users to assess in part due to the opaque mapping between declaratively specified queries and actual physical execution plans. Database developers currently must expend significant time and effort scanning log files to isolate and debug the root causes of performance issues. In response, we present Perfopticon, an interactive query profiling tool that enables rapid insight into common problems such as performance bottlenecks and data skew. Perfopticon combines interactive visualizations of (1) query plans, (2) overall query execution, (3) data flow among servers, and (4) execution traces. These views coordinate multiple levels of abstraction to enable detection, isolation, and understanding of performance issues. We evaluate our design choices through engagements with system developers, scientists, and students. We demonstrate that Perfopticon enables performance debugging for real‐world tasks.  相似文献   

2.
This paper presents an approach for the automated debugging of reactive and concurrent Java programs, combining model checking and runtime monitoring. Runtime monitoring is used to transform the Java execution traces into the input for the model checker, the purpose of which is twofold. First, it checks these execution traces against properties written in linear temporal logic (LTL), which represent desirable or undesirable behaviors. Second, it produces several execution traces for a single Java program by generating test inputs and exploring different schedulings in multithreaded programs. As state explosion is the main drawback to model checking, we propose two abstraction approaches to reduce the memory requirements when storing Java states. We also present the formal framework to clarify which kinds of LTL safety and liveness formulas can be correctly analysed with each abstraction for both finite and infinite program executions. A major advantage of our approach comes from the model checker, which stores the trace of each failed execution, allowing the programmer to replay these executions to locate the bugs. Our current implementation, the tool TJT, uses Spin as the model checker and the Java Debug Interface (JDI) for runtime monitoring. TJT is presented as an Eclipse plug-in and it has been successfully applied to debug complex public Java programs.  相似文献   
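A minimal illustration of the kind of safety check described above, reduced to a hand-written monitor over a recorded event trace (the event names and the property are invented for the example; TJT itself checks LTL formulas via Spin rather than with ad hoc code like this):

import java.util.List;

// A minimal sketch of replaying a recorded execution trace against a
// safety property; the event format and the property are hypothetical,
// not TJT's actual trace representation.
public class TraceSafetyMonitor {

    // Safety property: a lock is never acquired twice without an
    // intervening release, and never released while not held.
    static boolean satisfies(List<String> trace) {
        boolean held = false;
        for (String event : trace) {
            if (event.equals("acquire")) {
                if (held) return false;   // double acquire -> violation
                held = true;
            } else if (event.equals("release")) {
                if (!held) return false;  // release without acquire
                held = false;
            }
            // other events are irrelevant to this property
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> ok  = List.of("acquire", "work", "release");
        List<String> bad = List.of("acquire", "acquire", "release");
        System.out.println(satisfies(ok));   // true
        System.out.println(satisfies(bad));  // false
    }
}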

3.
Dynamic analysis through execution traces is frequently used to analyze the runtime behavior of software systems. However, tracing long-running executions generates voluminous data, which are complicated to analyze and manage. Extracting interesting performance or correctness characteristics out of large traces of data from several processes and threads is a challenging task. Trace abstraction and visualization are potential solutions to alleviate this challenge. Several efforts have been made over the years in many subfields of computer science for trace data collection, maintenance, analysis, and visualization. Many analyses start with an inspection of an overview of the trace, before digging deeper and studying more focused and detailed data. These techniques are common and well supported in geographical information systems, which automatically adjust the level of detail depending on the scale. However, most trace visualization tools operate at a single level of representation, which is not adequate to support multilevel analysis. Sophisticated techniques and heuristics are needed to address this problem. Multi-scale (multilevel) visualization with support for zoom and focus operations is an effective way to enable this kind of analysis. Considerable research, including several surveys, has been published in the field of trace visualization; however, multi-scale visualization has so far received little attention. In this paper, we provide a survey and methodological structure for categorizing tools and techniques aiming at multi-scale abstraction and visualization of execution trace data, and discuss the requirements and challenges that must be met to satisfy evolving user demands.

4.
Checking the correctness of software is a growing challenge. In this paper, we present a prototype implementation of the Partial Order Trace Analyzer (POTA), a tool for checking execution traces of both message passing and shared memory programs using temporal logic. So far runtime verification tools have used the total order model of an execution trace, whereas POTA uses a partial order model. The partial order model enables us to capture a possibly exponential number of interleavings and, in turn, this allows us to find bugs that are not found using a total order model. However, verification in the partial order model suffers from the state explosion problem – the number of possible global states in a program increases exponentially with the number of processes. POTA employs an effective abstraction technique called computation slicing. A slice of a computation (execution trace) with respect to a predicate is the computation with the least number of global states that contains all global states of the original computation for which the predicate evaluates to true. The advantage of this technique is that it mitigates the state explosion problem by reasoning only on the part of the global state space that is of interest. In POTA, we implement computation slicing algorithms for temporal logic predicates from a subset of CTL. The overall complexity of evaluating a predicate in this logic upon using computation slicing becomes polynomial in the number of processes, compared to exponential without slicing. We illustrate the effectiveness of our techniques in POTA on several test cases such as the General Inter-ORB Protocol (GIOP) [18] and the primary-secondary protocol [32]. POTA also contains a module that translates execution traces to Promela [16] (the input language of SPIN). This module enables us to compare our results on execution traces with SPIN. In some cases, we were able to verify traces with 250 processes, compared to only 10 processes using SPIN.

5.
6.
Whereas traditional testing mainly addresses a software system's positive requirements, security testing mainly addresses its negative requirements. Threat-model-based software security testing examines the software from an attacker's perspective. Security threats are modeled with UML sequence diagrams; message sequences are derived from the threat models, and threat behavior traces are derived from those message sequences. Once the program has been coded, the code is instrumented to record method calls and execution traces at runtime. Test cases are designed, the instrumented program is executed, and its runtime execution traces are recorded; the recorded execution traces are then compared with the threat behavior traces derived from the models to determine whether the program contains threat behaviors that violate the security policy.
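A rough sketch of the final comparison step, assuming both the threat trace and the recorded trace are plain sequences of method-call events (the class and event names below are hypothetical):

import java.util.List;

// Checks whether a threat behavior trace derived from a UML sequence
// diagram shows up inside the recorded method-call trace of the program.
public class ThreatTraceMatcher {

    // Returns true if every event of the threat trace occurs in the
    // recorded trace, in order (not necessarily contiguously).
    static boolean containsThreat(List<String> recorded, List<String> threat) {
        int next = 0;
        for (String call : recorded) {
            if (next < threat.size() && call.equals(threat.get(next))) {
                next++;
            }
        }
        return next == threat.size();
    }

    public static void main(String[] args) {
        List<String> recorded = List.of("login", "readConfig", "openSocket", "sendData");
        List<String> threat   = List.of("readConfig", "sendData"); // exfiltration pattern
        System.out.println(containsThreat(recorded, threat)); // true -> potential violation
    }
}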

7.
In this paper, we describe a framework able to support run-time adjustment and a posteriori analysis of business processes, which exploits the retrieval step of the Case-based Reasoning (CBR) methodology. In particular, our framework allows one to retrieve traces of process execution similar to the current one. Moreover, it supports an automatic organization of the trace database content through the application of hierarchical clustering techniques. Results can provide help both to end users, in the process execution phase, and to process engineers, in (formal) process conformance evaluation and long-term process schema redesign. Retrieval and clustering rely on a distance definition able to take into account temporal information in traces. This metric has outperformed simpler distance definitions in our experiments, which were conducted in a real-world application domain.
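The abstract does not give the metric's definition; the sketch below only conveys the general shape of a trace distance that blends activity mismatch with a temporal term, with invented weights and normalization:

import java.util.List;

// A rough stand-in for a temporally aware trace distance: aligned steps
// contribute a label-mismatch term and a duration-difference term.
public class TraceDistance {

    record Step(String activity, double durationSec) {}

    // Pairwise distance over aligned positions; unmatched tail counts fully.
    static double distance(List<Step> a, List<Step> b) {
        int n = Math.max(a.size(), b.size());
        double d = 0.0;
        for (int i = 0; i < n; i++) {
            if (i >= a.size() || i >= b.size()) { d += 1.0; continue; }
            Step x = a.get(i), y = b.get(i);
            double label = x.activity().equals(y.activity()) ? 0.0 : 1.0;
            double time  = Math.abs(x.durationSec() - y.durationSec())
                           / (1.0 + Math.max(x.durationSec(), y.durationSec()));
            d += 0.5 * label + 0.5 * time; // equal weights: an assumption
        }
        return d / n; // normalized to [0, 1]
    }

    public static void main(String[] args) {
        var current = List.of(new Step("triage", 10), new Step("exam", 30));
        var stored  = List.of(new Step("triage", 12), new Step("exam", 90));
        System.out.println(distance(current, stored));
    }
}

A distance of this form can then feed both nearest-neighbor retrieval of past traces and the hierarchical clustering of the trace database mentioned above.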

8.
We present a rich and highly dynamic technique for analyzing, visualizing, and exploring the execution traces of reactive systems. The two inputs are a designer’s inter-object scenario-based behavioral model, visually described using a UML2-compliant dialect of live sequence charts (LSC), and an execution trace of the system. Our method allows one to visualize, navigate through, and explore, the activation and progress of the scenarios as they “come to life” during execution. Thus, a concrete system’s runtime is recorded and viewed through abstractions provided by behavioral models used for its design, tying the visualization and exploration of system execution traces to model-driven engineering. We support both event-based and real-time-based tracing, and use details-on-demand mechanisms, multi-scaling grids, and gradient coloring methods. Novel model exploration techniques include semantics-based navigation, filtering, and trace comparison. The ideas are implemented and tested in a prototype tool called the Tracer.  相似文献   

9.
A smart object system encompasses the synergy between computationally augmented everyday objects and external applications. This paper presents a software framework for building smart object systems following a declarative programming approach centered around custom written documents that glue the smart objects together. More specifically, in the proposed framework, applications’ requirements and smart objects’ services are objectified through structured documents. A runtime infrastructure provides the spontaneous federation between smart objects and applications through structural type matching of these documents. There are three primary advantages of our approach: firstly, it allows developers to write applications in a generic way without prior knowledge of the smart objects that could be used by the applications. Secondly, smart object management (locating, accessing, etc.) issues are completely handled by the infrastructure; thus application development becomes rapid and simple. Finally, the programming abstraction used in the framework allows extension of functionalities of smart objects and applications very easily. We describe an implemented prototype of our framework and show examples of its use in a real life scenario to illustrate its feasibility.  相似文献   
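As a loose illustration of structural type matching between documents (the document structure is reduced here to a plain map of service names to signatures, which is an assumption, not the framework's format):

import java.util.Map;

// An application's requirement document is matched structurally against a
// smart object's service document: the object must offer every required
// service with a compatible signature.
public class StructuralMatcher {

    static boolean matches(Map<String, String> required, Map<String, String> offered) {
        for (var entry : required.entrySet()) {
            if (!entry.getValue().equals(offered.get(entry.getKey()))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        var appRequires  = Map.of("readTemperature", "() -> double");
        var lampOffers   = Map.of("switchOn", "() -> void");
        var sensorOffers = Map.of("readTemperature", "() -> double", "battery", "() -> int");
        System.out.println(matches(appRequires, lampOffers));   // false
        System.out.println(matches(appRequires, sensorOffers)); // true -> federate
    }
}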

10.
The Intelligent Accountability Middleware Architecture (Llama) project supports dependable service-oriented architecture (SOA) monitoring, runtime diagnosis, and reconfiguration. At its core, Llama implements an accountability service bus that users can install on existing service-deployment infrastructures. It collects and monitors service execution data from a key subset of services; enables Llama users to incorporate others' advanced diagnosis models and algorithms into the framework; and provides enterprise service bus extensions for collecting service profiling data, thus making process problems transparent to diagnose. Finally, experimental results indicate that using Llama contributes a modest amount of system overhead.  相似文献   

11.
We present an automated and configurable technique for runtime safety analysis of multithreaded programs that is able to predict safety violations from successful executions. Based on a formal specification of safety properties provided by a user, our technique enables us to automatically instrument a given program and create an observer so that the program emits relevant state update events to the observer and the observer checks these updates against the safety specification. The events are stamped with dynamic vector clocks, enabling the observer to infer a causal partial order on the state updates. All event traces that are consistent with this partial order, including the actual execution trace, are then analyzed online and in parallel. A warning is issued whenever one of these potential traces violates the specification. Our technique is scalable and can provide better coverage than conventional testing, but its coverage need not be exhaustive. In fact, one can trade off scalability and comprehensiveness: a window in the state space may be specified allowing the observer to infer some of the more likely runs; if the size of the window is 1, then only the actual execution trace is analyzed, as is the case in conventional testing; if the size of the window is ∞, then all the execution traces consistent with the actual execution trace are analyzed.  相似文献   
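For readers unfamiliar with the vector-clock machinery mentioned above, a generic textbook version (not the tool's actual implementation) looks roughly like this in Java:

import java.util.HashMap;
import java.util.Map;

// Per-thread vector clocks; the inferred causal partial order is the
// componentwise "happened-before" relation between clock values.
public class VectorClock {
    private final Map<String, Integer> clock = new HashMap<>();

    // Local event on thread t: increment t's own component.
    void tick(String t) {
        clock.merge(t, 1, Integer::sum);
    }

    // Receiving thread t merges the sender's clock, then ticks.
    void receive(String t, VectorClock sender) {
        sender.clock.forEach((k, v) -> clock.merge(k, v, Math::max));
        tick(t);
    }

    // a happened-before b iff a <= b componentwise and a != b.
    static boolean happenedBefore(VectorClock a, VectorClock b) {
        boolean strictlyLess = false;
        for (var e : a.clock.entrySet()) {
            int other = b.clock.getOrDefault(e.getKey(), 0);
            if (e.getValue() > other) return false;
            if (e.getValue() < other) strictlyLess = true;
        }
        for (var e : b.clock.entrySet()) {
            if (!a.clock.containsKey(e.getKey()) && e.getValue() > 0) strictlyLess = true;
        }
        return strictlyLess;
    }

    public static void main(String[] args) {
        VectorClock p = new VectorClock(), q = new VectorClock();
        p.tick("P");          // P performs a state update
        q.receive("Q", p);    // Q observes a message from P
        System.out.println(happenedBefore(p, q)); // true
    }
}

State updates that are unordered under this relation may be reordered into alternative, causally consistent traces, which is what allows the observer to predict violations not seen in the actual run.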

12.
Predictive analysis aims at detecting concurrency errors during runtime by monitoring a concrete execution trace of a concurrent program. In recent years, various models based on the happens-before causality relations have been proposed for predictive analysis. However, these models often rely on only the observed runtime events and typically do not utilize the program source code. Furthermore, the enumerative algorithms they use for verifying safety properties in the predicted traces often suffer from the interleaving explosion problem. In this paper, we introduce a precise predictive model based on both the program source code and the observed execution events, and propose a symbolic algorithm to check whether a safety property holds in all feasible permutations of events of the given trace. Rather than explicitly enumerating and checking the interleavings, our method conducts the search using a novel encoding and symbolic reasoning with a satisfiability modulo theory solver. We also propose a technique to bound the number of context switches allowed in the interleavings during the symbolic search, to further improve the scalability of the algorithm.  相似文献   
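The paper's search is symbolic (SMT-based); the toy helper below merely spells out the context-switch bound itself on an explicitly given interleaving of thread ids, which the symbolic encoding constrains rather than enumerates:

import java.util.List;

// Counts context switches in an interleaving (each element is the id of
// the thread scheduled next) and checks it against a bound.
public class ContextSwitchBound {

    static int contextSwitches(List<Integer> interleaving) {
        int switches = 0;
        for (int i = 1; i < interleaving.size(); i++) {
            if (!interleaving.get(i).equals(interleaving.get(i - 1))) switches++;
        }
        return switches;
    }

    static boolean withinBound(List<Integer> interleaving, int bound) {
        return contextSwitches(interleaving) <= bound;
    }

    public static void main(String[] args) {
        List<Integer> trace = List.of(0, 0, 1, 1, 0, 2);
        System.out.println(contextSwitches(trace)); // 3
        System.out.println(withinBound(trace, 2));  // false -> pruned from the search
    }
}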

13.
We present ViewDF: a flexible and declarative framework for incremental maintenance of materialized views (i.e., results of continuous queries) over streaming data. The main component of the proposed framework is the View Delta Function (ViewDF), which declaratively specifies how to update a materialized view when a new batch of data arrives. We describe and experimentally evaluate a prototype system based on this idea, which allows users to write ViewDFs directly and automatically translates common classes of streaming queries into ViewDFs. Our approach generalizes existing work on incremental view maintenance and enables new optimizations for views that are common in stream analytics, including those with pattern matching and sliding windows.  相似文献   
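As a toy analogue of a view delta function (the class and method names are placeholders, not ViewDF's interface), an incrementally maintained count-per-key view can fold in each new batch without touching earlier data:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The materialized view (a running count per key) is updated from each
// incoming batch alone, instead of being recomputed over the full stream.
public class CountView {
    private final Map<String, Long> view = new HashMap<>();

    // "Delta function": fold one incoming batch into the materialized view.
    void applyBatch(List<String> batch) {
        for (String key : batch) {
            view.merge(key, 1L, Long::sum);
        }
    }

    public static void main(String[] args) {
        CountView v = new CountView();
        v.applyBatch(List.of("a", "b", "a"));
        v.applyBatch(List.of("b", "c"));
        System.out.println(v.view); // e.g. {a=2, b=2, c=1}
    }
}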

14.
UCLA-SFINX is a neural network simulation environment that enables users to simulate a wide variety of neural network models at various levels of abstraction. A network specification language enables users to construct arbitrary network structures. Small, structurally irregular networks can be modeled by explicitly defining each neuron and its corresponding connections. Very large networks with regular connectivity patterns can be implicitly specified using array constructs. Graphics support, based on the X Window System, is provided to visualize simulation results. Details of the simulation environment are described, and simulation examples are presented to demonstrate SFINX's capabilities.

15.
Svend E. Knudsen. Software, 2011, 41(4): 393–402
A simple programming abstraction based on the notion of independence is introduced as a means for mapping the independence inherent in an algorithm explicitly into its programmed solution. This enables a compiler and runtime system to exploit the independence and achieve efficient parallelism of execution on multicore processors. The constructs needed to express mutual independence among statements are proposed and their implementation in iOberon, an extension of the Active Oberon programming language, is defined. The programming language extensions, runtime support, and performance measurements are described in detail. We believe that this concept of specifying local disjoint program fragments can be applied to other programming languages. Copyright © 2010 John Wiley & Sons, Ltd.  相似文献   
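The iOberon constructs themselves are not shown in the abstract; as a loose Java analogue, two statements declared mutually independent can simply be handed to the runtime as separate tasks:

import java.util.concurrent.CompletableFuture;

// The two calls below touch disjoint data, so they are independent and the
// runtime may schedule them on different cores. This only mimics the idea;
// it is not the iOberon language extension.
public class IndependentBlocks {

    static double work(double x) {
        double s = 0;
        for (int i = 1; i <= 1_000_000; i++) s += Math.sin(x * i);
        return s;
    }

    public static void main(String[] args) {
        CompletableFuture<Double> a = CompletableFuture.supplyAsync(() -> work(1.0));
        CompletableFuture<Double> b = CompletableFuture.supplyAsync(() -> work(2.0));
        System.out.println(a.join() + b.join());
    }
}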

16.
Future programming environments will incorporate a tighter coupling between language runtime systems and the monitoring tools that are used to debug, tune, visualize, and understand them. Many innovations that are developed first in higher level programming language environments will migrate into mainstream languages once their properties are understood and generalized.

The Alamo execution monitor architecture was developed to facilitate rapid development of execution monitors, especially visualization tools that are instrumental in understanding complex runtime system interactions in higher level languages. Alamo simplifies the development of such tools by solving the low-level access, control, and intrusion problems inherent in monitoring.

Alamo was implemented first for the very high-level imperative goal-directed language Icon. The architecture was then implemented for ANSI C in order to broaden the impact of the work. This paper describes the ANSI C implementation of Alamo and the monitoring services it provides.  相似文献   


17.
Analyzing and understanding the performance behavior of parallel applications on parallel computing platforms is a long‐standing concern in the High Performance Computing community. When the targeted platforms are not available, simulation is a reasonable approach to obtain objective performance indicators and explore various hypothetical scenarios. In the context of applications implemented with the Message Passing Interface, two simulation methods have been proposed, on‐line simulation and off‐line simulation, both with their own drawbacks and advantages. In this work, we present an off‐line simulation framework, that is, one that simulates the execution of an application based on event traces obtained from an actual execution. The main novelty of this work, when compared to previously proposed off‐line simulators, is that traces that drive the simulation can be acquired on large, distributed, heterogeneous, and non‐dedicated platforms. As a result, the scalability of trace acquisition is increased, which is achieved by enforcing that traces contain no time‐related information. Moreover, our framework is based on a state‐of‐the‐art scalable, fast, and validated simulation kernel. We introduce the notion of performing off‐line simulation from time‐independent traces, propose and evaluate several trace acquisition strategies, describe our simulation framework, and assess its quality in terms of trace acquisition scalability, simulation accuracy, and simulation time. Copyright © 2014 John Wiley & Sons, Ltd.  相似文献   
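A sketch of what a time-independent trace event might carry: volumes of computation and communication, but no timestamps (the field names are invented for illustration, not the framework's actual format):

// Record type for one event of a time-independent trace; replaying such
// events on a simulated platform yields predicted timings, which is why no
// wall-clock information needs to be captured during acquisition.
public record TimeIndependentEvent(
        int rank,               // MPI rank that issued the event
        String operation,       // e.g. "compute", "MPI_Send", "MPI_Recv"
        long instructionCount,  // work volume for compute events
        long messageBytes,      // payload size for communication events
        int peerRank) {         // communication partner, or -1 for compute

    public static void main(String[] args) {
        var e = new TimeIndependentEvent(3, "MPI_Send", 0, 65536, 7);
        System.out.println(e);
    }
}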

18.
UML is a standard visual modeling language widely used for describing, visualizing, constructing, and documenting software systems. This paper presents a runtime verification tool for Java programs driven by UML behavior diagrams. The tool takes a set of randomly generated test cases as input, runs the instrumented Java program under test, and obtains a set of execution traces for verification. By comparing these execution traces with the legal event sequences of the UML behavior diagrams, the tool checks the program against its dynamic behavioral specification. The paper describes the tool's design, algorithms, and implementation techniques, and discusses its usability and effectiveness through a case study.

19.
Context: A considerable portion of the software systems today are adopted in the embedded control domain. Embedded control software deals with controlling a physical system, and as such models of physical characteristics become part of the embedded control software. Objective: Due to the evolution of system properties and increasing complexity, faults can be left undetected in these models of physical characteristics. Therefore, their accuracy must be verified at runtime. Traditional runtime verification techniques that are based on states/events in software execution are inadequate in this case. The behavior suggested by models of physical characteristics cannot be mapped to behavioral properties of software. Moreover, implementation in a general-purpose programming language makes these models hard to locate and verify. Therefore, this paper proposes a novel approach to perform runtime verification of models of physical characteristics in embedded control software. Method: The development of an approach for runtime verification of models of physical characteristics and the application of the approach to two industrial case studies from the printing systems domain. Results: This paper presents a novel approach to specify models of physical characteristics using a domain-specific language, to define monitors that detect inconsistencies by exploiting redundancy in these models, and to realize these monitors using an aspect-oriented approach. We complement runtime verification with static analysis to verify the composition of domain-specific models with the control software written in a general-purpose language. Conclusions: The presented approach enables runtime verification of implemented models of physical characteristics to detect inconsistencies in these models, as well as broken hardware components and wear and tear of hardware in the physical system. The application of declarative aspect-oriented techniques to realize runtime verification monitors increases modularity and provides the ability to statically verify this realization. The complementary static and runtime verification techniques increase the reliability of embedded control software.
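Neither the domain-specific language nor the aspect weaving is reproduced here; the fragment below only sketches the core monitoring idea of exploiting redundancy, comparing two independent estimates of the same physical quantity (all names and numbers are illustrative, loosely inspired by the printing-systems domain mentioned above):

// Two independent models of the same physical quantity are evaluated at
// runtime; drift beyond a tolerance indicates a model fault, a broken
// component, or hardware wear.
public class RedundancyMonitor {

    // Model 1: paper speed derived from motor angular velocity and roller radius.
    static double speedFromMotor(double radPerSec, double rollerRadiusM) {
        return radPerSec * rollerRadiusM;
    }

    // Model 2: paper speed derived from two sheet sensors a known distance apart.
    static double speedFromSensors(double distanceM, double travelTimeSec) {
        return distanceM / travelTimeSec;
    }

    static void check(double v1, double v2, double toleranceMPerSec) {
        if (Math.abs(v1 - v2) > toleranceMPerSec) {
            System.err.printf("Model inconsistency: %.3f vs %.3f m/s%n", v1, v2);
        }
    }

    public static void main(String[] args) {
        double v1 = speedFromMotor(50.0, 0.02);   // 1.0 m/s
        double v2 = speedFromSensors(0.5, 0.45);  // ~1.11 m/s
        check(v1, v2, 0.05);                      // reports an inconsistency
    }
}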

20.
Cache coherence in shared-memory multiprocessor systems has been studied mostly from an architecture viewpoint, often by means of aggregating metrics. In many cases, aggregate events provide insufficient information for programmers to understand and optimize the coherence behavior of their applications. A better understanding would be given by source code correlations of not only aggregate events, but also finer granularity metrics directly linked to high-level source code constructs, such as source lines and data structures. In this paper, we explore a novel application-centric approach to studying coherence traffic. We develop a coherence analysis framework based on incremental coherence simulation of actual reference traces. We provide tool support to extract these reference traces and synchronization information from OpenMP threads at runtime using dynamic binary rewriting of the application executable. These traces are fed to ccSIM, our cache-coherence simulator. The novelty of ccSIM lies in its ability to relate low-level cache coherence metrics (such as coherence misses and their causative invalidations) to high-level source code constructs including source code locations and data structures. We explore the degree of freedom in interleaving data traces from different processors and assess simulation accuracy in comparison to metrics obtained from hardware performance counters. Our quantitative results show that: 1) Cache coherence traffic can be simulated with a considerable degree of accuracy for SPMD programs, as the invalidation traffic closely matches the corresponding hardware performance counters. 2) Detailed, high-level coherence statistics are very useful in detecting, isolating, and understanding coherence bottlenecks. We use ccSIM with several well-known benchmarks and find coherence optimization opportunities leading to significant reductions in coherence traffic and savings in wall-clock execution time  相似文献   
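A drastically simplified write-invalidate bookkeeping sketch, only to indicate what relating invalidations to source locations can mean in code (ccSIM's actual protocol model and trace format are not reproduced here):

import java.util.HashMap;
import java.util.Map;

// Tracks which processors share each cache line; a write invalidates the
// other sharers and charges those invalidations to the writing source line.
public class InvalidationCounter {
    private final Map<Long, java.util.Set<Integer>> sharers = new HashMap<>();
    private final Map<String, Long> invalidationsBySource = new HashMap<>();

    void read(int cpu, long line) {
        sharers.computeIfAbsent(line, l -> new java.util.HashSet<>()).add(cpu);
    }

    void write(int cpu, long line, String sourceLocation) {
        var set = sharers.computeIfAbsent(line, l -> new java.util.HashSet<>());
        long invalidated = set.stream().filter(p -> p != cpu).count();
        invalidationsBySource.merge(sourceLocation, invalidated, Long::sum);
        set.clear();
        set.add(cpu); // writer becomes the sole owner
    }

    public static void main(String[] args) {
        InvalidationCounter sim = new InvalidationCounter();
        sim.read(0, 0x40L);
        sim.read(1, 0x40L);
        sim.write(2, 0x40L, "solver.c:128");   // invalidates two sharers
        System.out.println(sim.invalidationsBySource); // {solver.c:128=2}
    }
}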
