Similar Literature
20 similar results found (search time: 140 ms)
1.
With newer complex multi-core systems, it is important to understand an application’s runtime behavior to be able to debug its execution, detect possible problems and bottlenecks and finally identify potential root causes. Execution traces usually contain precise data about an application execution. Their analysis and abstraction at multiple levels can provide valuable information and insights about an application’s runtime behavior. However, with multiple abstraction levels, it becomes increasingly difficult to find the exact location of detected performance or security problems. Tracing tools provide various analysis views to help users understand their application problems. However, these pre-defined views are often not sufficient to reveal all analysis aspects of the underlying application. A declarative approach that enables users to specify and build their own custom analyses and views based on their knowledge, requirements and problems can be more useful and effective. In this paper, we propose a generic declarative trace analysis framework to analyze, comprehend and visualize execution traces. This enhanced framework builds custom analyses based on a specified modeled state, extracted from a system execution trace and stored in a special-purpose database. The proposed solution enables users to first define their different analysis models based on their application and requirements, then visualize these models in many alternate representations (Gantt chart, XY chart, etc.), and finally filter the data to get some highlights or detect some potential patterns. Several sample applications with different operating systems are shown, using trace events gathered from Linux and Windows, at the kernel and user-space levels.
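The central idea in the abstract above — deriving a queryable modeled state from raw trace events — can be sketched compactly. The following is a minimal illustration, not the paper's actual framework: the event tuple shape, the `sched_switch` event name and its `next_task` field are assumptions modeled loosely on kernel scheduling traces.

```python
# Minimal sketch of a "modeled state" built from trace events: derive a
# per-CPU "current task" state from scheduling events (assumed to arrive in
# timestamp order), store it as time intervals, and query it afterwards.
from collections import defaultdict

def build_state_intervals(events):
    """events: iterable of (timestamp, cpu, event_name, payload) tuples."""
    current = {}                    # cpu -> (task, running since)
    intervals = defaultdict(list)   # cpu -> [(task, start, end)]
    for ts, cpu, name, payload in events:
        if name == "sched_switch":
            if cpu in current:
                task, since = current[cpu]
                intervals[cpu].append((task, since, ts))
            current[cpu] = (payload["next_task"], ts)
    return dict(intervals)

trace = [
    (0,  0, "sched_switch", {"next_task": "bash"}),
    (5,  0, "sched_switch", {"next_task": "firefox"}),
    (12, 0, "sched_switch", {"next_task": "bash"}),
]
print(build_state_intervals(trace))
# {0: [('bash', 0, 5), ('firefox', 5, 12)]}
```

The resulting interval lists are exactly the shape a Gantt-chart view consumes, which is why state-interval storage is a natural backend for this kind of analysis.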

2.
Understanding the behavior of large-scale distributed systems is generally extremely difficult, as it requires observing a very large number of components over long periods of time. Most analysis tools for distributed systems gather basic information such as individual processor or network utilization. Although scalable because of the data reduction techniques applied before the analysis, these tools are often insufficient to detect or fully understand anomalies in the dynamic behavior of resource utilization and their influence on application performance. In this paper, we propose a methodology for detecting resource usage anomalies in large-scale distributed systems. The methodology relies on four functionalities: characterized trace collection, multi-scale data aggregation, specifically tailored user interaction techniques, and visualization techniques. We show the efficiency of this approach through the analysis of simulations of the volunteer computing Berkeley Open Infrastructure for Network Computing (BOINC) architecture. Three scenarios are analyzed in this paper: analysis of the resource sharing mechanism, resource usage considering response time instead of throughput, and the evaluation of input file size on the BOINC architecture. The results show that our methodology makes it easy to identify resource usage anomalies, such as unfair resource sharing, contention, moving network bottlenecks, and harmful short-term resource sharing. Copyright © 2011 John Wiley & Sons, Ltd.

3.
Larus  J.R. 《Computer》1993,26(5):52-61
A program trace lists the addresses of instructions executed and data referenced during a program's execution. Earlier approaches to collecting program traces, including abstract execution and optimal control tracing, are reviewed. Two tracing systems based on these techniques are presented. Results collected when using the latter systems on several programs show significant reductions in the cost of collecting traces. Reductions in trace file sizes are also significant.

4.
In performance analysis of computer systems, trace-driven simulation techniques have the important advantage of credibility and accuracy. Unfortunately, traces are usually difficult to obtain, and little work has been done to provide efficient tools to help in the process of reducing and manipulating them. This paper presents TRAMP, a tool for the data reduction and data analysis phases of trace-driven simulation studies. TRAMP has three main advantages: it accepts a variety of common trace formats; it provides a programmable user interface in which many actions can be directly specified; and it is easy to extend. TRAMP is particularly helpful for reducing and analysing complex trace data, such as traces of file system or database activity. This paper presents the design principles and implementation techniques of TRAMP and provides a few concrete examples of the use of this tool.

5.
Intelligent data analysis applied to debug complex software systems (total citations: 1; self-citations: 0; cited by others: 1)
Emilio  Jorge J.  Juan A.  Juan   《Neurocomputing》2009,72(13-15):2785
The emergent behavior of complex systems, which arises from the interaction of multiple entities, can be difficult to validate, especially when the number of entities or their relationships grows. This validation requires understanding of what happens inside the system. In the case of multi-agent systems, which are complex systems as well, this understanding requires analyzing and interpreting execution traces containing agent-specific information, deducing how the entities relate to each other, guessing which acquaintances are being built, and how the total amount of data can be interpreted. The paper introduces some techniques which have been applied in developments made with an agent-oriented methodology, INGENIAS, which provides a framework for modeling complex agent-oriented systems. These techniques can be regarded as intelligent data analysis techniques, all of which are oriented towards providing simplified representations of the system. These techniques range from raw data visualization to clustering and extraction of association rules.

6.
We present a new method and tool for activity modelling through qualitative sequential data analysis. In particular, we address the question of constructing a symbolic abstract representation of an activity from an activity trace. We use knowledge engineering techniques to help the analyst build an ontology of the activity, that is, a set of symbols and hierarchical semantics that supports the construction of activity models. The ontology construction is pragmatic, evolutionary and driven by the analyst in accordance with their modelling goals and their research questions. Our tool helps the analyst define transformation rules to process the raw trace into abstract traces based on the ontology. The analyst visualizes the abstract traces and iteratively tests the ontology, the transformation rules and the visualization format to confirm the models of activity. With this tool and this method, we found innovative ways to represent a car-driving activity at different levels of abstraction from activity traces collected from an instrumented vehicle. As examples, we report two new strategies of lane changing on motorways that we have found and modelled with this approach.
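The pipeline the abstract describes — an ontology mapping raw trace symbols to abstract ones, plus transformation rules that rewrite abstract traces into higher-level episodes — can be illustrated with a toy example. The symbol names and the driving-trace vocabulary below are invented for illustration; they are not taken from the paper's tool.

```python
# Illustrative sketch: an "ontology" maps raw trace signals to abstract
# activity symbols, and a transformation rule rewrites a run of abstract
# symbols into one higher-level episode.
ONTOLOGY = {  # hypothetical raw driving signals -> abstract symbols
    "blinker_on": "SIGNAL",
    "steer_left": "STEER",
    "steer_right": "STEER",
    "lane_crossed": "CROSS",
}

def abstract_trace(raw):
    """Map known raw signals to abstract symbols; drop unknown ones."""
    return [ONTOLOGY[e] for e in raw if e in ONTOLOGY]

def apply_rule(trace, pattern, symbol):
    """Replace every occurrence of `pattern` (a list) with `symbol`."""
    out, i = [], 0
    while i < len(trace):
        if trace[i:i + len(pattern)] == pattern:
            out.append(symbol)
            i += len(pattern)
        else:
            out.append(trace[i])
            i += 1
    return out

raw = ["blinker_on", "steer_left", "lane_crossed", "speed_ok"]
abstr = abstract_trace(raw)          # ['SIGNAL', 'STEER', 'CROSS']
print(apply_rule(abstr, ["SIGNAL", "STEER", "CROSS"], "LANE_CHANGE"))
# ['LANE_CHANGE']
```

The analyst's iterative loop in the paper amounts to adjusting `ONTOLOGY` and the rule patterns until the abstracted trace matches the activity model being tested.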

7.
An important part of many software maintenance tasks is to gain a sufficient level of understanding of the system at hand. The use of dynamic information to aid in this software understanding process is a common practice nowadays. A major issue in this context is scalability: due to the vast amounts of information, it is a very difficult task to successfully navigate through the dynamic data contained in execution traces without getting lost. In this paper, we propose the use of two novel trace visualization techniques based on the massive sequence and circular bundle view, which both reflect a strong emphasis on scalability. These techniques have been implemented in a tool called Extravis. By means of distinct usage scenarios that were conducted on three different software systems, we show how our approach is applicable in three typical program comprehension tasks: trace exploration, feature location, and top-down analysis with domain knowledge.

8.
Checking the correctness of software is a growing challenge. In this paper, we present a prototype implementation of the Partial Order Trace Analyzer (POTA), a tool for checking execution traces of both message passing and shared memory programs using temporal logic. So far, runtime verification tools have used the total order model of an execution trace, whereas POTA uses a partial order model. The partial order model enables us to capture a possibly exponential number of interleavings and, in turn, this allows us to find bugs that are not found using a total order model. However, verification in the partial order model suffers from the state explosion problem – the number of possible global states in a program increases exponentially with the number of processes. POTA employs an effective abstraction technique called computation slicing. A slice of a computation (execution trace) with respect to a predicate is the computation with the least number of global states that contains all global states of the original computation for which the predicate evaluates to true. The advantage of this technique is that it mitigates the state explosion problem by reasoning only on the part of the global state space that is of interest. In POTA, we implement computation slicing algorithms for temporal logic predicates from a subset of CTL. The overall complexity of evaluating a predicate in this logic using computation slicing becomes polynomial in the number of processes, compared to exponential without slicing. We illustrate the effectiveness of our techniques in POTA on several test cases such as the General Inter-ORB Protocol (GIOP) [18] and the primary secondary protocol [32]. POTA also contains a module that translates execution traces to Promela [16] (the input language of SPIN). This module enables us to compare our results on execution traces with SPIN. In some cases, we were able to verify traces with 250 processes, compared to only 10 processes using SPIN.

9.
Execution trace logs are used to analyze system run-time behaviour and detect problems. Trace analysis tools usually read the input logs and gather either a detailed or brief summary of them to later process and inspect in the analysis steps. However, continuous and lengthy trace streams contained in the live tracing mode make it difficult to indefinitely record all events or even a detailed summary of the whole stream. This situation is further complicated when the system aims to compare different parts of the trace and provide a multilevel and multidimensional analysis. This paper presents an architecture with corresponding data structures and algorithms to process stream events, generate an adequate summary—detailed enough for recent data and succinct enough for old data—and organize them to enable an efficient multilevel and multidimensional analysis, similar to online analytical processing (OLAP) analyses in database applications. The proposed solution arranges data in a compact manner using interval forms and enables range queries for any arbitrary time duration. Because this feature makes it possible to compare different system parameters in different time areas, it significantly influences the system's ability to provide a comprehensive trace analysis. Although Linux operating system trace logs are used to evaluate the solution, we propose a generic architecture that can be used to summarize various types of stream data. Copyright © 2014 John Wiley & Sons, Ltd.
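The "detailed enough for recent data and succinct enough for old data" summary described above can be sketched with a toy two-resolution structure: recent timestamps are kept verbatim, older ones are collapsed into fixed-width interval buckets, and a range query combines both. This is only a minimal illustration of the degrading-resolution idea under assumed parameters, not the paper's architecture.

```python
# Toy two-resolution stream summary: recent events are stored individually,
# events older than `recent_window` are folded into coarse interval buckets.
# Timestamps are assumed to arrive in non-decreasing order.
class StreamSummary:
    def __init__(self, bucket_width=10, recent_window=20):
        self.bucket_width = bucket_width
        self.recent_window = recent_window
        self.buckets = {}   # bucket start -> event count (old, coarse data)
        self.recent = []    # individual timestamps (new, detailed data)

    def add(self, ts):
        self.recent.append(ts)
        cutoff = ts - self.recent_window
        while self.recent and self.recent[0] < cutoff:
            old = self.recent.pop(0)
            start = (old // self.bucket_width) * self.bucket_width
            self.buckets[start] = self.buckets.get(start, 0) + 1

    def count(self, lo, hi):
        """Events in [lo, hi): exact for recent data, bucket-granular for old."""
        n = sum(1 for t in self.recent if lo <= t < hi)
        for start, c in self.buckets.items():
            if lo <= start and start + self.bucket_width <= hi:
                n += c
        return n
```

A query spanning old data is answered at bucket granularity while a query over recent data stays exact, which is the trade-off that makes indefinite live tracing feasible.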

10.
Among the many techniques in computer graphics, ray tracing is prized because it can render realistic images, albeit at great computational expense. Ray tracing's large computation requirements, coupled with its inherently parallel nature, make ray tracing algorithms attractive candidates for parallel implementation. In this paper we illustrate the utility and the importance of a suite of performance analysis tools when exploring the performance of several approaches to ray tracing on a distributed memory parallel system. These ray tracing algorithm variations introduce parallelism based on both ray and object partitions. Traditional timing analysis can quantify the performance effects of different algorithm choices (i.e. when an algorithm is best matched to a given problem), but it cannot identify the causes of these performance differences. We argue, by example, that a performance instrumentation system is needed that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale, parallel computations; assimilating the enormous volume of performance data mandates visual display.

11.
Researchers and analysts in modern industrial and academic environments are faced with a daunting amount of multi-dimensional data. While there has been significant development in the areas of data mining and knowledge discovery, there is still the need for improved visualizations and generic solutions. The state-of-the-art in visual analytics and exploratory data visualization is to incorporate more profound analysis methods while focusing on fast interactive abilities. The common trend in these scenarios is to either visualize an abstraction of the data set or to better utilize screen-space. This paper presents a novel technique that combines clustering, dimension reduction and multi-dimensional data representation to form a multivariate data visualization that incorporates both detail and overview. This amalgamation counters the individual drawbacks of common projection and multi-dimensional data visualization techniques, namely ambiguity and clutter. A specific clustering criterion is used to decompose a multi-dimensional data set into a hierarchical tree structure. This decomposition is embedded in a novel Dimensional Anchor visualization through the use of a weighted linear dimension reduction technique. The resulting Structural Decomposition Tree (SDT) provides not only an insight of the data set's inherent structure, but also conveys detailed coordinate value information. Further, fast and intuitive interaction techniques are explored in order to guide the user in highlighting, brushing, and filtering of the data.

12.
A formal framework for the analysis of execution traces collected from distributed systems at run-time is presented. We introduce the notions of event and message traces to capture the consistency of causal dependencies between the elements of a trace. We formulate an approach to property testing where a partially ordered execution trace is modeled by a collection of communicating automata. We prove that the model exactly characterizes the causality relation between the events/messages in the observed trace and discuss the implementation of this approach in SDL, where ObjectGEODE is used to verify properties using model-checking techniques. Finally, we illustrate the approach with industrial case studies. Received May 2004, Revised February 2005, Accepted April 2005 by J. Derrick, M. Harman and R. M. Herons

13.
In this paper we propose a method for analysing and visualizing individual maps between shapes, or collections of such maps. Our method is based on isolating and highlighting areas where the maps induce significant distortion of a given measure in a multi-scale way. Unlike the majority of prior work, which focuses on discovering maps in the context of shape matching, our main focus is on evaluating, analysing and visualizing a given map, and the distortion(s) it introduces, in an efficient and intuitive way. We are motivated primarily by the fact that most existing metrics for map evaluation are quadratic and expensive to compute in practice, and that current map visualization techniques are suitable primarily for global map understanding, and typically do not highlight areas where the map fails to meet certain quality criteria in a multi-scale way. We propose to address these challenges in a unified way by considering the functional representation of a map, and performing spectral analysis on this representation. In particular, we propose a simple multi-scale method for map evaluation and visualization, which provides detailed multi-scale information about the distortion induced by a map, which can be used alongside existing global visualization techniques.

14.
Understanding the behavioural aspects of a software system can be made easier if efficient tool support is provided. Lately, there has been an increase in the number of tools for analysing execution traces. These tools, however, have different formats for representing execution traces, which hinders interoperability and limits reuse and sharing of data. To allow for better synergies among trace analysis tools, it would be beneficial to develop a standard format for exchanging traces. In this paper, we present a graph-based format, called compact trace format (CTF), which we hope will lead the way towards such a standard. CTF can model traces generated from a variety of programming languages, including both object-oriented and procedural ones. CTF is built with scalability in mind to overcome the vast size of most interesting traces. Indeed, the design of CTF is based on the idea that call trees can be transformed into more compact ordered directed acyclic graphs by representing similar subtrees only once. CTF is also supported by our trace analysis tool SEAT (Software Exploration and Analysis Tool).
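The compression idea at the heart of CTF — representing identical call subtrees only once, turning the call tree into a smaller DAG — is essentially hash-consing. The sketch below shows the technique in its simplest form; it is not the actual CTF encoding, and the node representation is an assumption.

```python
# Hash-consing a call tree: structurally identical subtrees are canonicalized
# to a single shared object, so the tree becomes a compact DAG.
def share_subtrees(node, table=None):
    """node: (name, [children]); returns a canonical, shared tuple."""
    if table is None:
        table = {}
    name, children = node
    key = (name, tuple(share_subtrees(c, table) for c in children))
    # setdefault returns the previously stored tuple if an equal one exists,
    # so equal subtrees collapse to one shared object.
    return table.setdefault(key, key)

tree = ("main", [
    ("parse", [("lex", [])]),
    ("parse", [("lex", [])]),   # identical subtree, stored only once
])
shared = share_subtrees(tree)
assert shared[1][0] is shared[1][1]   # both 'parse' children are one object
```

For this five-node tree only three unique nodes remain; on real traces, where the same call subsequences repeat heavily, the reduction is what makes large traces tractable.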

15.
We present a rich and highly dynamic technique for analyzing, visualizing, and exploring the execution traces of reactive systems. The two inputs are a designer’s inter-object scenario-based behavioral model, visually described using a UML2-compliant dialect of live sequence charts (LSC), and an execution trace of the system. Our method allows one to visualize, navigate through, and explore, the activation and progress of the scenarios as they “come to life” during execution. Thus, a concrete system’s runtime is recorded and viewed through abstractions provided by behavioral models used for its design, tying the visualization and exploration of system execution traces to model-driven engineering. We support both event-based and real-time-based tracing, and use details-on-demand mechanisms, multi-scaling grids, and gradient coloring methods. Novel model exploration techniques include semantics-based navigation, filtering, and trace comparison. The ideas are implemented and tested in a prototype tool called the Tracer.

16.
Performance visualization tools of the past decade have yielded new insights into the behavior of sequential, parallel and distributed programs. However, they have three inherent limitations: (1) they only display what happened in one execution of a program (this is dangerous when analyzing concurrent applications, which are prone to non-deterministic behavior); (2) a human uses one or more bandwidth-limited senses with a visualization tool (this limits the scalability of a visualization tool); (3) the relationship of ‘interesting’ program events is often separated in time by other events; thus discerning time-dependent behavior often hinges on finding the ‘right’ visualization—a possibly time-consuming activity. CHITRA93 complements visualization systems, while alleviating these limitations, and analyzes a set (or ensemble) of traces by combining the visualization of a few traces with a statistical analysis of the entire ensemble (overcoming (1)). It reduces the ensemble to empirical models that capture the time-dependent relationships of ‘interesting’ program events through application-, programming-language- and computer-architecture-independent analysis techniques (addressing (2) and (3)). It also incorporates the following: transforms, such as aggregation, that simplify the ensemble and reduce the state-space size of the models generated; a user interface that allows certain transforms to be selected by editing the visualization with a mouse; homogeneity tests that allow partitioning of an ensemble; an efficient semi-Markov model generation algorithm whose computation time is linear in the sum of the lengths of the traces comprising the ensemble; and a CHAID-based model that can fathom non-Markovian relationships among transitions in the traces.
The use of CHITRA93 is demonstrated by partitioning ten parallel database traces with nearly 8,000 states into two homogeneous subsets, each modeled by an irreducible, periodic and hierarchical stochastic process with as few as four states.

17.
Modelling relationships between entities in real-world systems with a simple graph is a standard approach. However, reality is better embraced as several interdependent subsystems (or layers). Recently, the concept of a multilayer network model has emerged from the field of complex systems. This model can be applied to a wide range of real-world data sets. Examples of multilayer networks can be found in the domains of life sciences, sociology, digital humanities and more. Within the domain of graph visualization, there are many systems which visualize data sets having many characteristics of multilayer graphs. This report provides a state of the art and a structured analysis of contemporary multilayer network visualization, not only for researchers in visualization, but also for those who aim to visualize multilayer networks in the domain of complex systems, as well as those developing systems across application domains. We have explored the visualization literature to survey visualization techniques suitable for multilayer graph visualization, as well as tools, tasks and analytic techniques from within application domains. This report also identifies the outstanding challenges for multilayer graph visualization and suggests future research directions for addressing them.

18.
Quickly and effectively mining valuable information from massive data to better guide decision-making is an important goal of big data analysis. Visual analytics is an important approach to big data analysis: it exploits human visual perception, uses visualization charts to intuitively present the patterns hidden in complex data, and supports human-centered interactive data analysis. However, visual analytics still faces many challenges, such as the high cost of data preparation, high latency of interactive responses, the steep learning curve of visual analysis, and inefficient interaction modes. To address these challenges, researchers have proposed, from the perspectives of data management and artificial intelligence, a series of methods to optimize the human-machine collaboration model of visual analytics systems and to improve their degree of intelligence. This paper systematically reviews, analyzes, and summarizes these methods, and proposes the basic concepts and a key technical framework of intelligent visual data analysis. Within this framework, it then surveys and analyzes domestic and international research progress on visual-analysis-oriented data preparation, intelligent data visualization, efficient visual analysis, and intelligent visual analysis interfaces. Finally, it discusses future development trends of intelligent visual data analysis.

19.
陈梓浩  徐辰  钱卫宁  周傲英 《软件学报》2023,34(3):1236-1258
In big data governance applications, data analysis is an indispensable step; it is time-consuming and demands substantial computing resources, so optimizing its execution efficiency is crucial. Early on, when data volumes were modest, data analysts could run analysis algorithms with traditional matrix computation tools. With the explosive growth of data, however, traditional tools such as MATLAB can no longer deliver the execution efficiency that applications require, which has given rise to a wave of distributed matrix computation systems for big data analysis. This paper surveys research progress on distributed matrix computation systems from both technical and system perspectives. First, from the viewpoint of the mature field of data management, it dissects the challenges distributed matrix computation systems face at four levels: programming interfaces, compiler optimization, execution engines, and data storage. It then discusses and summarizes the relevant techniques at each of these four levels. Finally, it analyzes representative distributed matrix computation systems and outlines directions for future research.

20.
The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely primarily on manual observations or coarse, unverified assumptions, and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial features, include methods for quantification of area densities, as well as flows between specified locations, buildings or departments, classified according to the feature set. Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the proposed methods and visualization tools, we present facility utilization analysis results for a large hospital complex covering more than 10 hectares. The evaluation is based on WiFi traces collected in the hospital’s WiFi infrastructure over two weeks, observing around 18,000 different devices and recording more than a billion individual WiFi measurements. We highlight the tools’ ability to deduce people’s presences and movements and how they can provide respective insights into the test-bed hospital by investigating utilization patterns globally as well as selectively, e.g. for different user roles, times of day, spatial granularities or focus areas.
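The two basic quantities the abstract names — area densities per time slot and flows between areas — can be sketched from a stream of (device, timestamp, area) observations. The record format, field names, and area labels below are assumptions for illustration, not the paper's actual data model.

```python
# Toy density and flow computation over WiFi-style presence records.
from collections import Counter

def densities(records, slot=3600):
    """records: (device_id, timestamp, area); -> {(area, slot_index): #devices}.
    Each device counts once per area per slot, however many probes it sent."""
    seen = {(area, ts // slot, dev) for dev, ts, area in records}
    return Counter((area, t) for area, t, _ in seen)

def flows(records):
    """Count transitions between consecutive distinct areas, per device."""
    by_dev = {}
    for dev, ts, area in sorted(records, key=lambda r: (r[0], r[1])):
        by_dev.setdefault(dev, []).append(area)
    moves = Counter()
    for path in by_dev.values():
        for a, b in zip(path, path[1:]):
            if a != b:
                moves[(a, b)] += 1
    return moves

records = [
    ("d1", 0,   "ER"),
    ("d1", 100, "ER"),
    ("d1", 200, "Ward"),
    ("d2", 50,  "ER"),
]
print(densities(records))   # ER holds 2 devices, Ward 1, in slot 0
print(flows(records))       # one ER -> Ward transition
```

Deduplicating per device per slot (rather than counting raw measurements) is what lets a billion WiFi probes collapse into occupancy figures a planner can read.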

