Similar Documents
20 similar documents found (search time: 578 ms)
1.
Intelligent data analysis applied to debug complex software systems   Cited by: 1 (self-citations: 0; citations by others: 1)
Emilio, Jorge J., Juan A., Juan. Neurocomputing, 2009, 72(13-15): 2785
The emergent behavior of complex systems, which arises from the interaction of multiple entities, can be difficult to validate, especially as the number of entities or their relationships grows. This validation requires understanding what happens inside the system. In the case of multi-agent systems, which are complex systems as well, this understanding requires analyzing and interpreting execution traces containing agent-specific information, deducing how the entities relate to each other, inferring which acquaintances are being built, and determining how the total amount of data can be interpreted. The paper introduces techniques that have been applied in developments made with INGENIAS, an agent-oriented methodology that provides a framework for modeling complex agent-oriented systems. These techniques can be regarded as intelligent data analysis techniques, all of which are oriented towards providing simplified representations of the system. They range from raw data visualization to clustering and the extraction of association rules.
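As an illustration of the last step mentioned above, the following is a minimal sketch of association-rule extraction over agent execution traces, assuming a made-up trace format (one set of event types per run); the event names and confidence threshold are hypothetical and not taken from the INGENIAS tooling.

```python
# Hedged sketch: mining pairwise associations from agent execution traces,
# in the spirit of the abstract's "extraction of association rules".
# The trace format and threshold are illustrative assumptions.
from collections import Counter
from itertools import combinations

# Each trace is the set of event types one agent emitted during a run.
traces = [
    {"request", "inform", "commit"},
    {"request", "inform"},
    {"request", "commit"},
    {"inform", "commit"},
]

pair_counts = Counter()
item_counts = Counter()
for t in traces:
    item_counts.update(t)
    pair_counts.update(combinations(sorted(t), 2))

min_confidence = 0.6
for (a, b), n_ab in pair_counts.items():
    # confidence of the rule a -> b: P(b | a)
    conf = n_ab / item_counts[a]
    if conf >= min_confidence:
        print(f"{a} -> {b}  (confidence {conf:.2f})")
```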

2.
Dynamic analysis through execution traces is frequently used to analyze the runtime behavior of software systems. However, tracing long-running executions generates voluminous data, which are complicated to analyze and manage. Extracting interesting performance or correctness characteristics out of large traces of data from several processes and threads is a challenging task. Trace abstraction and visualization are potential solutions to alleviate this challenge. Several efforts have been made over the years in many subfields of computer science for trace data collection, maintenance, analysis, and visualization. Many analyses start with an inspection of an overview of the trace before digging deeper and studying more focused and detailed data. These techniques are common and well supported in geographical information systems, which automatically adjust the level of detail depending on the scale. However, most trace visualization tools operate at a single level of representation, which is not adequate to support multilevel analysis. Sophisticated techniques and heuristics are needed to address this problem. Multi-scale (multilevel) visualization with support for zoom and focus operations is an effective way to enable this kind of analysis. Considerable research and several surveys have been published on trace visualization; multi-scale visualization, however, has so far received little attention. In this paper, we provide a survey and a methodological structure for categorizing tools and techniques that aim at multi-scale abstraction and visualization of execution trace data, and we discuss the requirements and challenges these tools face in meeting evolving user demands.
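To make the overview-then-detail idea concrete, here is an illustrative sketch (not taken from the survey) of the basic aggregation behind multi-scale trace views: timestamped events are binned into buckets whose width shrinks as the user zooms in. The event data are invented.

```python
# Illustrative sketch: zoom-dependent aggregation of timestamped trace
# events into fixed-width buckets, the basic operation behind overviews.
from collections import Counter

events = [(0.12, "write"), (0.95, "read"), (1.10, "read"),
          (3.40, "write"), (3.55, "write"), (7.80, "read")]

def overview(events, bucket_width):
    """Count events per time bucket; smaller width = deeper zoom level."""
    counts = Counter(int(t // bucket_width) for t, _ in events)
    return {b * bucket_width: n for b, n in sorted(counts.items())}

print(overview(events, bucket_width=4.0))  # coarse overview
print(overview(events, bucket_width=1.0))  # zoomed-in detail
```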

3.
Visual data analysis can be envisioned as a collaboration between the user and the computational system with the aim of completing a given task. Pursuing an effective system-user integration, in which the system actively helps the user reach his or her analysis goal, has been a focus of visualization research for quite some time. However, this problem is still largely unsolved. As a result, users may be overwhelmed by powerful but complex visual analysis systems, which limits their ability to produce insightful results. In this context, guidance is a promising step towards enabling an effective mixed-initiative collaboration that promotes visual analysis. However, how guidance should be put into practice remains to be unravelled. We therefore conducted a comprehensive literature review and provide an overview of how guidance is tackled by different approaches in visual analysis systems. We distinguish between guidance that the system provides to support the user, and guidance that the user provides to support the system. By identifying open problems, we highlight promising research directions and point to missing factors that are needed to enable the envisioned human-computer collaboration and thus promote more effective visual data analysis.

4.
This paper describes a set of tools that enables developers to log and analyze the run-time behavior of distributed control systems. A distinguishing feature of the tools is that they can be applied across distributed systems. The logging tools enable developers to instrument C or C++ programs so that data indicating state changes can be logged automatically in a variety of formats; in particular, run-time data from distributed systems can be synchronized into a single relational database. Tools are also provided for visualizing the logged data. Analysis to verify correct program behavior is done using a new interval logic that is described in this paper. The logic enables system engineers to express temporal specifications for the autonomous control program, which are then checked against the logged data. The data logging, visualization, and interval logic analysis tools are all fully implemented. Results are given from a NASA distributed autonomous control system application.
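The following is a hedged sketch of the kind of check such an interval logic enables, assuming a simplified log of (timestamp, event) records; the property, event names, and data are illustrative, not the paper's actual logic or NASA application data.

```python
# Sketch: checking one simple temporal property against logged state
# changes. Log format and property are illustrative assumptions.

log = [  # (timestamp, event) records, e.g. pulled from a relational DB
    (0.0, "command_sent"),
    (0.4, "ack_received"),
    (2.0, "command_sent"),
    (3.7, "ack_received"),
]

def check_response_within(log, trigger, response, deadline):
    """Every `trigger` must be followed by `response` within `deadline` s."""
    violations = []
    for i, (t, ev) in enumerate(log):
        if ev != trigger:
            continue
        ok = any(e == response and t < t2 <= t + deadline
                 for t2, e in log[i + 1:])
        if not ok:
            violations.append(t)
    return violations

print(check_response_within(log, "command_sent", "ack_received", 1.0))
# -> [2.0]: the second command's ack arrived after the 1.0 s deadline
```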

5.
Visualization of sentiments and opinions extracted from or annotated in texts has become a prominent topic of research over the last decade. From basic pie and bar charts used to illustrate customer reviews to extensive visual analytics systems involving novel representations, sentiment visualization techniques have evolved to deal with complex multidimensional data sets, including temporal, relational, and geospatial aspects. This contribution presents a survey of sentiment visualization techniques based on a detailed categorization. We describe the background of sentiment analysis, introduce a categorization for sentiment visualization techniques that includes 7 groups with 35 categories in total, and discuss 132 techniques from peer-reviewed publications together with an interactive web-based survey browser. Finally, we discuss insights and opportunities for further research in sentiment visualization. We expect this survey to be useful for visualization researchers whose interests include sentiment or other aspects of text data, as well as for researchers and practitioners from other disciplines in search of efficient visualization techniques applicable to their tasks and data.

6.
This survey provides an introduction to eye tracking visualization with an overview of existing techniques. Eye tracking is important for evaluating user behaviour. Analysing eye tracking data is typically done quantitatively, applying statistical methods. In recent years, however, researchers have increasingly been using qualitative and exploratory analysis methods based on visualization techniques. For this state-of-the-art report, we investigated about 110 research papers presenting visualization techniques for eye tracking data. We classified these visualization techniques and identified two main categories: point-based methods and methods based on areas of interest. Additionally, we conducted an expert review, asking leading eye tracking experts how they apply visualization techniques in their analysis of eye tracking data. Based on the experts' feedback, we identified challenges that have to be tackled in the future so that visualizations become even more widely applied in eye tracking research.

7.
This paper considers the problem of distributed motion- and task-planning of multi-agent and multi-agent-object systems under temporal-logic-based tasks and uncertain dynamics. We focus on manipulator-endowed robotic agents that can interact with their surroundings. We first present continuous control algorithms for multi-agent navigation and cooperative object manipulation that exhibit the following properties. First, they are distributed, in the sense that each agent calculates its own control signal from local interaction with the other agents and the environment. Second, they guarantee safety properties in terms of inter-agent collision avoidance and obstacle avoidance. Third, they adapt on the fly to dynamic uncertainties and are robust to exogenous disturbances. These algorithms allow the abstraction of the underlying system to a finite-state representation. Inspired by formal-verification techniques, we use such a representation to derive plans for the agents that satisfy the given temporal-logic tasks. Various simulation results and hardware experiments verify the efficiency of the proposed algorithms.

8.
Multivariate graphs are prevalent across many fields, including transportation and neuroscience. A key task in graph analysis is the exploration of connectivity, for example, to analyze how signals flow through neurons or to explore how well different cities are connected by flights. While standard node-link diagrams are helpful in judging connectivity, they do not scale to large networks. Adjacency matrices also do not scale to large networks and are only suitable for judging the connectivity of adjacent nodes. A key approach to realizing scalable graph visualization is queries: instead of displaying the whole network, only a relevant subset is shown. Query-based techniques for analyzing connectivity in graphs, however, can easily suffer from clutter if the query result is large enough. To remedy this, we introduce techniques that provide an overview of the connectivity and reveal details on demand. We make two main contributions: (1) two novel visualization techniques that work in concert for summarizing graph connectivity; and (2) Graffinity, an open-source implementation of these visualizations supplemented by detail views to enable a complete analysis workflow. Graffinity was designed in close collaboration with neuroscientists and is optimized for connectomics data analysis, yet the technique is applicable across domains. We validate the connectivity overview and our open-source tool with illustrative examples using flight and connectomics data.
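A minimal sketch of the query-based overview idea: rather than drawing every link, summarize how many edges run between queried node groups. The graph and groupings below are invented, and Graffinity's actual summarization is richer (it adds linked detail views).

```python
# Sketch: connectivity overview as group-to-group edge counts,
# instead of one visual link per edge. Data are invented.
from collections import Counter

edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b2"), ("b1", "c1")]
group = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "c1": "C"}

summary = Counter((group[u], group[v]) for u, v in edges)
for (src, dst), n in sorted(summary.items()):
    print(f"{src} -> {dst}: {n} edge(s)")
```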

9.
Neurotechnology has made great strides in the last 20 years. However, we still have a long way to go to commercialize many of these technologies, as we lack a unified framework to study cyber-neural systems (CNS) that bring the hardware, software, and the neural system together. Dynamical systems play a key role in developing these technologies, as they capture different aspects of the brain and provide insight into its function. Converging evidence suggests that fractional-order dynamical systems are advantageous in modeling neural systems because of their compact representation and their accuracy in capturing the long-range memory exhibited in neural behavior. In this brief survey, we provide an overview of fractional CNS, that is, fractional-order systems in the context of CNS. In particular, we introduce basic definitions required for the analysis and synthesis of fractional CNS, encompassing system identification, state estimation, and closed-loop control. Additionally, we illustrate some applications in the context of CNS and draw possible future research directions. Advancements in these three areas will be critical in developing the next generation of CNS, which will, ultimately, improve people's quality of life.
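As a worked illustration of the long-range memory property (an added example, not drawn from the survey itself), a discrete fractional-order linear system in the Grünwald-Letnikov formulation makes the state update depend on the entire past trajectory rather than only the previous step:

```latex
% Discrete fractional-order linear system (Grünwald-Letnikov form).
% The fractional difference of order \alpha weights the entire history:
\[
  \Delta^{\alpha} x[k] \;=\; \sum_{j=0}^{k} (-1)^{j} \binom{\alpha}{j}\, x[k-j],
\]
% so the system \Delta^{\alpha} x[k+1] = A\,x[k] unrolls into an update
% that depends on all previous states, not only on x[k]:
\[
  x[k+1] \;=\; A\,x[k] \;-\; \sum_{j=1}^{k+1} (-1)^{j} \binom{\alpha}{j}\, x[k+1-j].
\]
```

For α = 1 the binomial coefficients vanish for j ≥ 2 and the update collapses to the ordinary one-step difference; for non-integer α every past state contributes, which is the compactly parameterized long-range memory the abstract refers to.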

10.
System noise, or jitter, is the activity of hardware, firmware, operating system, runtime system, and management software events. It has been shown to disproportionately impact application performance in current-generation large-scale clustered systems running general-purpose operating systems (GPOS). Jitter mitigation techniques, such as co-scheduling jitter events across operating systems, improve application performance, but their effectiveness on future petascale systems is unknown. To understand whether existing jitter mitigation solutions enable scalable petascale performance, we construct two complementary jitter models based on a detailed analysis of system noise from the nodes of a large-scale system running a GPOS. We validate these two models using experimental data from a system consisting of 256 GPOS instances with 8192 CPUs. Based on our models, we project a minimum slowdown of 1.8%, 4.1%, and 6.5% for applications executing on a similar one-petaflop system running 1024 GPOS instances and performing global synchronization operations once every 100 ms, 10 ms, and 1 ms, respectively. Our projections indicate that, although existing mitigation solutions enable scalable petascale performance, additional techniques are required to contain the impact of jitter on multi-petaflop systems, especially for tightly synchronized applications.
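The intuition behind such projections can be reproduced with a toy model (the parameters below are assumptions for illustration, not the paper's validated models): in a bulk-synchronous run, every global barrier waits for the slowest node, so noise that is rare on any single node delays almost every barrier once enough nodes participate.

```python
# Toy jitter model with made-up parameters: each node independently hits
# a noise event per barrier; the barrier pays for the slowest node.
import random

def expected_slowdown(nodes, barriers, p_noise, noise_ms, interval_ms):
    random.seed(0)  # reproducible toy run
    total, ideal = 0.0, barriers * interval_ms
    for _ in range(barriers):
        worst = max(noise_ms if random.random() < p_noise else 0.0
                    for _ in range(nodes))
        total += interval_ms + worst
    return (total - ideal) / ideal * 100.0

for interval in (100.0, 10.0, 1.0):  # synchronization period in ms
    s = expected_slowdown(nodes=1024, barriers=2000,
                          p_noise=0.001, noise_ms=0.5, interval_ms=interval)
    print(f"sync every {interval:5.1f} ms -> ~{s:.1f}% slowdown")
```

With these made-up parameters the slowdown grows roughly tenfold each time the synchronization period shrinks by a factor of ten, mirroring the qualitative trend in the projections above.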

11.
Data reduction is increasingly being applied to scientific data for numerical simulations, scientific visualizations, and data analyses. It is most often used to lower I/O and storage costs, and sometimes to lower in-memory data size as well. In this paper, we consider five categories of data reduction techniques based on their information loss: (1) truly lossless, (2) near lossless, (3) lossy, (4) mesh reduction, and (5) derived representations. We then survey available techniques in each of these categories, summarize their properties from a practical point of view, and discuss their relative merits within each category. We believe this work will enable simulation scientists and visualization/data analysis scientists to decide which data reduction techniques will be most helpful for their needs.

12.
Building visualization and analysis pipelines is a major hurdle in the adoption of visualization and workflow systems by domain scientists. In this paper, we propose techniques to help users construct pipelines by consensus: automatically suggesting completions based on a database of previously created pipelines. In particular, we compute correspondences between existing pipeline subgraphs from the database and use these to predict sets of likely pipeline additions to a given partial pipeline. By presenting these predictions in a carefully designed interface, users can create visualizations and other data products more efficiently, because they can augment their normal work patterns with the suggested completions. We present an implementation of our technique in a publicly available, open-source scientific workflow system and demonstrate efficiency gains in real-world situations.
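A hedged sketch of the consensus idea, reduced to single-edge statistics (the actual system matches whole pipeline subgraphs): given a database of past pipelines, rank the modules that most often followed the user's current last module. The module names and database are invented.

```python
# Sketch: consensus-based pipeline completion via follower frequencies.
# Pipeline database and module names are invented for illustration.
from collections import Counter

pipeline_db = [
    ["reader", "filter", "isosurface", "renderer"],
    ["reader", "filter", "slice", "renderer"],
    ["reader", "isosurface", "renderer"],
]

def suggest_next(partial_pipeline, top_k=3):
    """Rank modules that most often followed the current last module."""
    last = partial_pipeline[-1]
    followers = Counter(p[i + 1]
                        for p in pipeline_db
                        for i in range(len(p) - 1)
                        if p[i] == last)
    return followers.most_common(top_k)

print(suggest_next(["reader", "filter"]))
# -> [('isosurface', 1), ('slice', 1)]
```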

13.
Cluster analysis is a useful method that reveals underlying structures and relations among items after grouping them into clusters. In the case of temporal data, clusters are defined over time intervals, during which they usually exhibit structural changes. Conventional cluster analysis does not provide sufficient methods to analyze these structural changes, which are, however, crucial for the interpretation and evaluation of temporal clusters. In this paper, we present two novel, interactive visualization techniques that enable users to explore and interpret the structural changes of temporal clusters. We introduce the temporal cluster view, which visualizes the structural quality of a number of temporal clusters, and temporal signatures, which represent the structure of clusters over time. We discuss how these views are utilized to understand the temporal evolution of clusters. We evaluate the proposed techniques in the cluster analysis of mixed lipid bilayers.

14.
Many industrial products are based on embedded computer systems. Usually, these systems have to fulfil real-time requirements, and correct system functionality depends on their logical correctness as well as on their temporal correctness. In order to verify the temporal behavior of real-time systems, previous scientific work has, to a large extent, concentrated on static analysis techniques. Although these techniques offer the possibility of providing safe estimates of temporal behavior for certain cases, there are a number of cases in practice for which static analysis cannot be easily applied. Furthermore, no commercial tools for timing analysis of real-world programs are available. Therefore, the developed systems have to be thoroughly tested in order to detect existing deficiencies in temporal behavior, as well as to strengthen confidence in temporal correctness. An investigation of existing test methods shows that they mostly concentrate on testing for logical correctness; they are not specialised in examining temporal correctness, which is also essential to real-time systems. For this reason, existing test procedures must be supplemented by new methods that determine whether the system violates its specified timing constraints. Normally, a violation means that outputs are produced too early or that their computation takes too long. The task of the tester therefore is to find the input situations with the longest or shortest execution times, in order to check whether they produce a temporal error. If the search for such inputs is interpreted as an optimization problem, evolutionary computation can be used to automatically find the inputs with the longest or shortest execution times. This automatic search for accurate test data by means of evolutionary computation is called evolutionary testing. Experiments using evolutionary testing on a number of programs with up to 1511 LOC and 5000 input parameters have successfully identified longer and shorter execution times than had been found using other testing techniques. Evolutionary testing therefore seems to be a promising approach for the verification of timing constraints. A combination of evolutionary testing and systematic testing offers further opportunities to improve test quality and could lead to an effective test strategy for real-time systems.
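A minimal sketch of evolutionary testing under stated assumptions: a tiny genetic algorithm searches for integer inputs that maximize the measured execution time of a stand-in program. The `program_under_test` workload, the input encoding, and the GA parameters are all hypothetical, not the paper's experimental setup.

```python
# Sketch: evolutionary testing with fitness = measured execution time.
# The workload and GA parameters are illustrative assumptions.
import random, time

def program_under_test(x):
    # stand-in workload: runtime depends on the input in a non-obvious way
    for _ in range(1000 + (x * 7919) % 50000):
        pass

def fitness(x):
    start = time.perf_counter()
    program_under_test(x)
    return time.perf_counter() - start  # longer run = fitter test input

random.seed(1)
population = [random.randrange(10**6) for _ in range(20)]
for _ in range(15):  # generations
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]  # selection: keep the slowest-running inputs
    offspring = [random.choice(parents) ^ (1 << random.randrange(20))
                 for _ in range(15)]  # mutation: flip one input bit
    population = parents + offspring

print("worst-case candidate input:", max(population, key=fitness))
```

To search for the shortest execution times instead, the same loop runs with the ranking reversed.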

15.
Machine learning (ML) models are nowadays used in complex applications in various domains, such as medicine, bioinformatics, and other sciences. Due to their black-box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools for enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as for researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data.

16.
Mike W., Matthew C. Neurocomputing, 2008, 71(7-9): 1462-1476
Multiple neural network systems have become popular techniques for tackling complex tasks, often giving improved performance compared to single-network systems. For example, modular systems can improve generalisation through task decomposition, whereas multiple classifier and regressor systems typically improve generalisation through the ensemble combination of redundant networks. Whilst there has been significant focus on understanding the theoretical properties of some of these multi-net systems, particularly ensemble systems, there has been little theoretical work on the properties of the generic combination of networks, which is important in developing more complex systems, perhaps even those a step closer to their biological counterparts. In this article, we provide a formal framework in which the generic combination of neural networks can be described and in which the properties of the system can be rigorously analysed. We achieve this by describing multi-net systems in terms of partially ordered sets and state transition systems. By way of example, we explore an abstract version of learning applied to a generic multi-net system that can combine an arbitrary number of networks in sequence and in parallel. Using the framework, we show with a constructive proof that, under specific conditions, if it is possible to train the generic system, then training can be achieved by the abstract technique described.

17.
A characteristic that many emerging technologies and interaction techniques have in common is a shift towards tighter coupling between human and computer. In addition to traditional discrete interaction, more continuous interaction techniques, such as gesture recognition, haptic feedback, and animation, play an increasingly important role. Additionally, many supervisory control systems (such as flight deck systems) already have a strong continuous element. The complexity of these systems and the need for rigorous analysis of the human factors involved in their operation lead us to examine formal, and possibly automated, support for their analysis. The fact that these systems have important temporal aspects and potentially involve continuous variables, besides discrete events, motivates the application of hybrid systems modelling, which has the expressive power to encompass these issues. Essentially, we are concerned with human-factors-related questions whose answers depend on interactions between the user and a complex, dynamic system. In this paper we explore the use of hybrid automata, a formalism for hybrid systems, for the specification and analysis of interactive systems. To illustrate the approach we apply it to the analysis of an existing flight deck instrument for monitoring and controlling the hydraulics subsystem.
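To illustrate the formalism, here is a deliberately simplified two-mode hybrid automaton in code; the dynamics, units, and thresholds are invented and are not the paper's flight deck hydraulics model. A continuous variable evolves under a mode-dependent flow condition, and guarded discrete transitions switch the mode.

```python
# Sketch: two-mode hybrid automaton (continuous flow + guarded switches).
# Dynamics and thresholds are illustrative assumptions.
def simulate(t_end=10.0, dt=0.01):
    pressure, mode, t = 2800.0, "pump_on", 0.0
    trace = []
    while t < t_end:
        # continuous flow condition of the current mode (psi per second)
        pressure += (50.0 if mode == "pump_on" else -100.0) * dt
        # discrete transitions, guarded by pressure thresholds
        if mode == "pump_on" and pressure >= 3000.0:
            mode = "pump_off"
        elif mode == "pump_off" and pressure <= 2700.0:
            mode = "pump_on"
        t += dt
        trace.append((round(t, 2), mode, round(pressure, 1)))
    return trace

for sample in simulate()[::200]:  # coarse sample of the trajectory
    print(sample)
```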

18.
Data abstraction techniques are widely used in multiresolution visualization systems to reduce visual clutter and facilitate analysis from overview to detail. However, analysts are usually unaware of how well the abstracted data represent the original dataset, which can affect the reliability of results gleaned from the abstractions. In this paper, we define two data abstraction quality measures for computing the degree to which the abstraction conveys the original dataset: the histogram difference measure and the nearest neighbor measure. They have been integrated within XmdvTool, a public-domain multiresolution visualization system for multivariate data analysis that supports sampling as well as clustering to simplify data. Several interactive operations are provided, including adjusting the data abstraction level, changing selected regions, and setting the acceptable data abstraction quality level. Using these operations, analysts can select an optimal data abstraction level. Analysts can also compare different abstraction methods using the measures to see how well relative data density and outliers are maintained, and then select an abstraction method that meets the requirements of their analytic tasks.
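A sketch of the histogram difference idea, with binning and normalization chosen for illustration rather than taken from XmdvTool's exact definition: compare the per-dimension value distribution of the abstracted data against the original, where 0 means the shapes match.

```python
# Sketch: histogram-difference quality of a sampling-based abstraction.
# Binning scheme and L1 distance are illustrative assumptions.
import random

def histogram(values, bins, lo, hi):
    """Normalized histogram of `values` over [lo, hi]."""
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    return [c / len(values) for c in counts]

def histogram_difference(original, abstracted, bins=10):
    lo, hi = min(original), max(original)
    h_orig = histogram(original, bins, lo, hi)
    h_abs = histogram(abstracted, bins, lo, hi)
    # L1 distance between normalized histograms; 0 = identical shape
    return sum(abs(a - b) for a, b in zip(h_orig, h_abs))

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10000)]
sample = random.sample(data, 200)  # sampling-based abstraction
print(f"histogram difference: {histogram_difference(data, sample):.3f}")
```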

19.
In this state-of-the-art report, we discuss relevant research on the visualization of complex, multi-variate data. We discuss how different techniques take effect at specific stages of the visualization pipeline and how they apply to multi-variate data sets composed of scalars, vectors, and tensors. We also provide a categorization of these techniques with the aim of giving a better overview of related approaches. Based on this classification, we highlight combinable and hybrid approaches and focus on techniques that potentially lead towards new directions in visualization research. In the second part of this paper, we take a look at recent techniques that are useful for the visualization of complex data sets, either because they are general purpose or because they can be adapted to specific problems.

20.
Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analysed and compared in different views. However, the semantics of such selections often depend on specific parameter settings, and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means for managing spatial selections in visualized data. The selections are automatically associated with the context in which they have been created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios, such as the visualization of simulation data, the analysis of historical documents, and the display of anatomical data.
