Similar literature
20 similar documents found (search time: 31 ms)
1.
The visualization community has to date developed many intuitions and understandings of how to judge the quality of views in visualizing data. The computation of a visualization's quality and usefulness ranges from measuring clutter and overlap up to the existence and perception of specific (visual) patterns. This survey attempts to report, categorize and unify the diverse understandings, and aims to establish a common vocabulary that will enable a wide audience to understand their differences and subtleties. For this purpose, we present a commonly applicable quality-metric formalization that details and relates all constituent parts of a quality metric. We organize our corpus of reviewed research papers along the data types established in the information visualization community: multi- and high-dimensional, relational, sequential, geospatial and text data. For each data type, we select the visualization subdomains in which quality metrics are an active research field and report their findings, reason about the underlying concepts, describe goals, and outline the constraints and requirements. One central goal of this survey is to provide guidance on future research opportunities for the field and to outline how different visualization communities could benefit from each other by applying or transferring knowledge to their respective subdomains. Additionally, we aim to motivate the visualization community to compare computed measures to the perception of humans.
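As a toy illustration of the kind of quality metric the survey formalizes, the sketch below computes a simple clutter score for a scatterplot: the fraction of points that are (partially) occluded by a near neighbour. The function name, the radius parameter and the metric itself are our own illustrative choices, not taken from the survey.

```python
import math

def overplotting_ratio(points, radius=0.05):
    """Fraction of points lying within `radius` of another point.

    A crude clutter metric: 0.0 means no point occludes another,
    1.0 means every point is (partially) occluded by a neighbour.
    """
    n = len(points)
    occluded = set()
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if math.hypot(dx, dy) < radius:
                occluded.update((i, j))  # both points are cluttered
    return len(occluded) / n if n else 0.0

spread = [(0.0, 0.0), (0.5, 0.5), (1.0, 0.0)]   # well-separated points
dense = [(0.0, 0.0), (0.01, 0.0), (0.0, 0.01)]  # one collapsed cluster
print(overplotting_ratio(spread))  # 0.0
print(overplotting_ratio(dense))   # 1.0
```

A metric like this could then be compared against human judgments of clutter, in the spirit of the survey's closing remark.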

2.
3.
Information visualization as a field has grown rapidly in popularity since the first information visualization conference in 1995. As a consequence of this growth, however, it is increasingly difficult to follow the growing body of literature within the field. Survey papers and literature reviews are valuable tools for managing the great volume of previously published research papers, and the quantity of survey papers in visualization has reached a critical mass. To this end, this survey paper takes a step forward by surveying and classifying literature survey papers in order to help researchers understand the current landscape of Information Visualization. It is, to our knowledge, the first survey of survey papers (SoS) in Information Visualization. The paper classifies survey papers into natural topic clusters, which enables readers to find relevant literature, and develops the first classification of classifications. It also enables researchers to identify both mature and less developed research directions, as well as future directions. It is a valuable resource for both newcomers and experienced researchers in and outside the field of Information Visualization and Visual Analytics.

4.
The purpose of multi-run simulations is often to capture the variability of the output with respect to different initial settings. Comparative analysis of multi-run spatio-temporal simulation data requires us to investigate the differences in the dynamics of the simulations' changes over time. To capture these changes and differences, aggregated statistical information is often insufficient, and it is desirable to capture the local differences between spatial data fields at different times and between different runs. To calculate the pairwise similarity between data fields, we generalize the concept of isosurface similarity from individual surfaces to entire fields and propose efficient computation strategies. The described approach can be applied to a single scalar field across all simulation runs, or generalized to a similarity measure capturing all data fields of a multi-field data set simultaneously. Given the field similarity, we use multi-dimensional scaling approaches to visualize the similarity in two-dimensional or three-dimensional projected views, as well as plotting one-dimensional similarity projections over time. Each simulation run is depicted as a polyline within the similarity maps. The overall visual analysis concept can be applied using our proposed field similarity or any other existing measure of field similarity. We evaluate our measure against popular existing measures for different configurations and discuss their advantages and limitations. We apply them to generate similarity maps for real-world data sets within the overall concept for comparative visualization of multi-run spatio-temporal data and discuss the results.
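The pipeline described in this abstract — pairwise field similarity, then projection into a low-dimensional similarity map — can be sketched as follows. The RMS field distance and the classical multi-dimensional scaling below are generic stand-ins, not the paper's own measure or implementation.

```python
import numpy as np

def field_distance(a, b):
    """RMS difference between two scalar fields on a shared grid --
    a simple stand-in for a field similarity measure."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def classical_mds(dist, k=2):
    """Project an n x n distance matrix to k dimensions (classical MDS)."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j        # double centering
    vals, vecs = np.linalg.eigh(b)        # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]    # take the k largest
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Three toy 'runs', each a 4x4 scalar field; runs 0 and 1 are near-identical.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
runs = [base, base + 0.01, base + 1.0]
n = len(runs)
d = np.array([[field_distance(runs[i], runs[j]) for j in range(n)]
              for i in range(n)])
coords = classical_mds(d)
# Similar runs land close together in the 2D similarity map.
```

In the paper's setting, each run would additionally be traced over time as a polyline through such a map.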

5.
A multiplex network has links of different types, allowing it to express many overlapping kinds of relationships. A core task in network analysis is to evaluate and understand group cohesion; that is, to explain why groups of elements belong together based on the underlying structure of the network. We present Detangler, a system that supports visual analysis of group cohesion in multiplex networks through dual linked views. These views feature new data abstractions derived from the original multiplex network: the substrate network and the catalyst network. We contribute two novel techniques that allow the user to analyze the complex structure of the multiplex network without the extreme visual clutter that would result from simply showing it directly. The harmonized layout visual encoding technique provides spatial stability between the substrate and catalyst views. The pivot brushing interaction technique supports linked highlighting between the views based on computations in the underlying multiplex network, allowing the user to leapfrog between subsets of catalysts and substrates. We present results from the motivating application domain of annotated news documents with a usage scenario and preliminary expert feedback. A second usage scenario presents group cohesion analysis of the social network of the early American independence movement.

6.
7.
This paper addresses the increasing demand in industry for methods to analyze and visualize multimodal data involving a spectral modality. Two data modalities are used: high-resolution X-ray computed tomography (XCT) for structural characterization and low-resolution X-ray fluorescence (XRF) spectral data for elemental decomposition. We present InSpectr, an integrated tool for the interactive exploration and visual analysis of multimodal, multiscalar data. The tool has been designed around a set of tasks identified by domain experts in the fields of XCT and XRF. It supports registered single-scalar and spectral datasets, optionally coupled with element maps and reference spectra. InSpectr instantiates various linked views for the integration of spatial and non-spatial information to provide insight into an industrial component's structural and material composition: views with volume renderings of composite and individual 3D element maps visualize global material composition; transfer functions defined directly on the spectral data and overlaid pie-chart glyphs show elemental composition in 2D slice views; and a representative aggregated spectrum and spectra density histograms provide a global overview in the spectral view. Spectral magic lenses, spectrum probing and elemental composition probing of points using a pie-chart view and a periodic table view aid local material composition analysis. Two datasets are investigated to outline the usefulness of the presented techniques: a virtually created 3D phantom with a brass metal alloy and a real-world 2D water phantom with insertions of gold, barium and gadolinium. A detailed user evaluation of the results is also provided.

8.
We introduce an approach for explicitly revealing changes between versions of a visualization workbook to support version comparison tasks. Visualization authors may need to understand version changes for a variety of reasons, analogous to document editing: an author who has been away for a while may need to catch up on the changes made by a co-author, or a person responsible for formatting compliance may need to check formatting changes that have occurred since the last time they reviewed the work. We introduce ChangeCatcher, a prototype tool to help people find and understand changes in a visualization workbook, specifically a Tableau workbook. Our design is based on interviews we conducted with experts to investigate user needs and practices around version comparison. ChangeCatcher provides an overview of changes across six categories and employs a multi-level details-on-demand approach to progressively reveal details. Our qualitative study showed that ChangeCatcher's methods for explicitly revealing and categorizing version changes were helpful in version comparison tasks.

9.
In this report, we review the current state of the art of web-based visualization applications. An increasing number of such applications have emerged recently, because new technologies offered by modern browsers have greatly increased the capabilities for visualization on the web. We first review the technical aspects enabling this development. This includes not only improvements for local rendering, such as WebGL and HTML5, but also infrastructures such as grid and cloud computing platforms. Another important factor is the transfer of data between the server and the client; we therefore also discuss advances in this field, for example methods to reduce bandwidth requirements, such as compression, and other optimizations, such as progressive rendering and streaming. After establishing these technical foundations, we review existing web-based visualization applications and prototypes from various application domains. Furthermore, we propose a classification of these web-based applications based on the technologies and algorithms they employ. Finally, we discuss promising application areas that would benefit from web-based visualization and assess their feasibility based on the existing approaches.

10.
Given a set of rectangles embedded in the plane, we consider the problem of adjusting the layout to remove all overlap while preserving the orthogonal order of the rectangles. The objective is to minimize the displacement of the rectangles. We call this problem Minimum-Displacement Overlap Removal (MDOR). Our interest in this problem is motivated by the application of displaying metadata of archaeological sites. Because most existing overlap removal algorithms are not designed to minimize displacement while preserving orthogonal order, we present and compare several approaches tailored to our particular use case. We introduce a new overlap removal heuristic, which we call reArrange. Although conceptually simple, it is very effective in removing the overlap while keeping the displacement small. Furthermore, we propose an additional procedure to repair the orthogonal order after every iteration, with which we extend both our new heuristic and PRISM, a widely used overlap removal algorithm. We compare the performance of both approaches with and without this order repair method. The experimental results indicate that reArrange is very effective for heterogeneous input data where the overlap is concentrated in a few dense regions.
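A minimal sketch of the problem setting is shown below: each overlapping pair of rectangles is separated along the axis of least penetration, splitting the displacement evenly, which incidentally keeps the pair's relative order along the chosen axis. This is not the paper's reArrange heuristic or its order-repair procedure, just an illustration of pairwise overlap removal.

```python
def remove_overlap(rects, max_iters=100):
    """Iteratively push overlapping axis-aligned rectangles apart.

    rects: list of [x, y, w, h], mutated in place. Each overlapping
    pair is separated along the axis of least penetration, with the
    displacement split evenly between the two rectangles.
    """
    for _ in range(max_iters):
        moved = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                a, b = rects[i], rects[j]
                # Penetration depth per axis (positive => overlap).
                px = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
                py = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
                if px <= 0 or py <= 0:
                    continue
                moved = True
                if px < py:  # separate horizontally
                    s = 1 if a[0] <= b[0] else -1
                    a[0] -= s * px / 2
                    b[0] += s * px / 2
                else:        # separate vertically
                    s = 1 if a[1] <= b[1] else -1
                    a[1] -= s * py / 2
                    b[1] += s * py / 2
        if not moved:  # layout is overlap-free
            break
    return rects

rects = [[0, 0, 2, 2], [1, 0, 2, 2]]  # overlap by 1 unit in x
remove_overlap(rects)
print(rects)  # [[-0.5, 0, 2, 2], [1.5, 0, 2, 2]]
```

Resolving one pair can create new overlaps elsewhere, which is why the outer loop iterates until a pass makes no changes; the paper's contribution addresses exactly the harder question of doing this with minimal total displacement while repairing orthogonal order globally.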

11.
Scientific data acquired through sensors that monitor natural phenomena, as well as simulation data that imitate time-identified events, have fueled the need for interactive techniques to analyze and understand trends and patterns across space and time. We present a novel interactive visualization technique that fuses ground-truth measurements with simulation results in real time to support the continuous tracking and analysis of spatio-temporal patterns. We start by constructing a reference model that densely represents the expected temporal behavior, and then use GPU parallelism to advect measurements on the model and track their location at any given point in time. Our results show that users can interactively fill the spatio-temporal gaps in real-world observations and generate animations that accurately describe physical phenomena.
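The core advection step can be illustrated with a serial Euler integrator that tracks where a measurement drifts through a velocity field. The paper performs this per measurement in parallel on the GPU; the analytic field and step parameters below are hypothetical stand-ins.

```python
def advect(pos, velocity, dt, steps):
    """Track a measurement's location over time by Euler integration
    through a velocity field given as a callable (x, y) -> (vx, vy)."""
    x, y = pos
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
    return x, y

# Uniform eastward flow: a sensor reading drifts right over time.
uniform = lambda x, y: (1.0, 0.0)
print(advect((0.0, 0.0), uniform, dt=0.25, steps=4))  # (1.0, 0.0)
```

On a GPU, each measurement would run this loop in its own thread, sampling the reference model's velocity field instead of an analytic function.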

12.
To understand how the immune system works, one needs a clear picture of its cellular composition and the cells' corresponding properties and functionality. Mass cytometry is a novel technique to determine the properties of single cells with unprecedented detail. This amount of detail allows for much finer differentiation, but also comes at the cost of more complex analysis. In this work, we present Cytosplore, which implements an interactive workflow to analyze mass cytometry data in an integrated system, providing multiple linked views showing different levels of detail and enabling the rapid definition of known and unknown cell types. Cytosplore handles millions of cells, each represented as a high-dimensional data point, facilitates hypothesis generation and confirmation, and provides a significant speed-up of the current workflow. We show the effectiveness of Cytosplore in a case-study evaluation.

13.
14.
We present the first visualization tool that enables a comparative depiction of structural stress tensor data for the vessel walls of cerebral aneurysms. Such aneurysms bear the risk of rupture, while their treatment also carries considerable risks for the patient. Medical researchers emphasize the importance of analyzing the interaction of morphological and hemodynamic information for patient-specific rupture risk evaluation and treatment analysis. Tensor data such as the stress inside the aneurysm walls characterizes the interplay between morphology and blood flow and appears to be an important rupture-prone criterion. We use different glyph-based techniques to depict local stress tensors simultaneously and compare their applicability to cerebral aneurysms in a user study. We thus offer medical researchers an effective visual exploration tool to assess aneurysm rupture risk. We developed a GPU-based implementation of our techniques with a flexible interactive data exploration mechanism. Our depictions were designed in collaboration with domain experts, and we provide details about the evaluation.

15.
Interaction is critical to effective visualization, but can be difficult to author and debug due to dependencies among input events, program state and visual output. Recent advances leverage reactive semantics to support declarative design and avoid the “spaghetti code” of imperative event handlers. While reactive programming improves many aspects of development, textual specifications still fail to convey the complex runtime dynamics. In response, we contribute a set of visual debugging techniques to reveal the runtime behavior of reactive visualizations. A timeline view records input events and dynamic variable updates, allowing designers to replay and inspect the propagation of values step by step. On-demand annotations overlay the output visualization to expose relevant state and scale mappings in situ. Dynamic tables visualize how backing datasets change over time. To evaluate the effectiveness of these techniques, we studied how first-time Vega users debug interactions in faulty, unfamiliar specifications; with no prior knowledge, participants were able to accurately trace errors through the specification.
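A toy model of the recorded timeline might look like the following: a reactive variable logs every update in propagation order, so a debugger could replay the sequence step by step. This is our own minimal sketch, not Vega's actual signal implementation; all names are illustrative.

```python
class Signal:
    """A minimal reactive variable that logs every update to a shared
    timeline, so value propagation can be replayed step by step."""
    timeline = []  # (signal name, new value), in update order

    def __init__(self, name, value=None):
        self.name, self.value, self.listeners = name, value, []

    def set(self, value):
        self.value = value
        Signal.timeline.append((self.name, value))
        for fn in self.listeners:
            fn(value)  # propagate to dependent signals

    def derive(self, name, fn):
        """Create a dependent signal that recomputes via fn on update."""
        out = Signal(name)
        self.listeners.append(lambda v: out.set(fn(v)))
        return out

# A click position drives a derived 'selection radius' signal.
click_x = Signal("click_x")
radius = click_x.derive("radius", lambda x: abs(x) * 2)
click_x.set(5)
click_x.set(-3)
print(Signal.timeline)
# [('click_x', 5), ('radius', 10), ('click_x', -3), ('radius', 6)]
```

The interleaved log makes it visible that each `click_x` update triggers a `radius` update, which is the kind of propagation a timeline view lets designers inspect.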

16.
In volume visualization, transfer functions are used to classify the volumetric data and assign optical properties to the voxels. In general, transfer functions are generated in a transfer function space: the feature space constructed from data values and properties derived from the data. If volumetric objects have the same or overlapping data values, it is difficult to separate them in the transfer function space. In this paper, we present a rule-enhanced transfer function design method that allows important structures of the volume to be more effectively separated and highlighted. We define a set of rules based on the local frequency distribution of volume attributes. A rule-selection method based on a genetic algorithm is proposed to learn the set of rules that can distinguish the user-specified target tissue from other tissues. In the rendering stage, voxels satisfying these rules are rendered with higher opacities in order to highlight the target tissue. The proposed method was tested on various volumetric datasets to enhance the visualization of important structures that are difficult to visualize with traditional transfer function design methods. The results demonstrate the effectiveness of the proposed method.
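The rule-enhanced idea can be sketched as a classifier that falls back to a base 1D opacity map unless a rule fires. The gradient-based predicate below is a hypothetical stand-in for the paper's rules over local frequency distributions, and the genetic rule selection is not modeled.

```python
def make_transfer_function(base_opacity, rules):
    """Build a voxel classifier from a base 1D opacity map plus rules.

    base_opacity: maps a data value to an opacity in [0, 1].
    rules: list of (predicate, boosted_opacity); each predicate sees
    the voxel's value and a dict of local features (here 'gradient',
    a stand-in for locally derived volume attributes).
    """
    def classify(value, features):
        for predicate, boosted in rules:
            if predicate(value, features):
                return boosted          # rule match: highlight voxel
        return base_opacity(value)      # otherwise: default mapping
    return classify

# Base ramp: opacity grows linearly with the 8-bit data value.
ramp = lambda v: min(max(v / 255.0, 0.0), 1.0)
# Hypothetical rule: mid-range values with a strong local gradient
# belong to the target tissue and are rendered nearly opaque.
target_rule = (lambda v, f: 80 <= v <= 120 and f["gradient"] > 0.5, 0.95)

tf = make_transfer_function(ramp, [target_rule])
print(tf(100, {"gradient": 0.9}))  # 0.95 (target tissue, highlighted)
print(tf(100, {"gradient": 0.1}))  # 100/255, about 0.39 (not highlighted)
```

The point of the sketch is that two voxels with identical data values receive different opacities, which is exactly the separation a plain 1D transfer function cannot achieve.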

17.
We present a user-assisted video stabilization algorithm that can stabilize challenging videos when state-of-the-art automatic algorithms fail to generate a satisfactory result. Current methods do not give the user any control over the look of the final result: users either have to accept the stabilized result as is, or discard it should the stabilization fail to generate a smooth output. Our system introduces two new modes of interaction that allow the user to improve an unsatisfactory stabilized video. First, we cluster tracks and visualize them on the warped video; the user ensures that appropriate tracks are selected by clicking on track clusters to include or exclude them. Second, the user can directly specify how regions in the output video should look by drawing quadrilaterals to select and deform parts of the frame. These user-provided deformations reduce undesirable distortions in the video. Our algorithm then computes a stabilized video using the user-selected tracks, while respecting the user-modified regions. The process of interactively removing user-identified artifacts can sometimes introduce new ones, though in most cases there is a net improvement. We demonstrate the effectiveness of our system on a variety of challenging hand-held videos.

18.
Data summarization allows analysts to explore datasets that may be too complex or too large to visualize in detail. Designers face a number of design and implementation choices when using summarization in visual analytics systems. While these choices influence the utility of the resulting system, there are no clear guidelines for the use of these summarization techniques. In this paper, we codify summarization use in existing systems to identify key factors in the design of summary visualizations. We use quantitative content analysis to systematically survey examples of visual analytics systems and enumerate the use of these design factors in data summarization. Through this analysis, we expose the relationship between design considerations, strategies for data summarization in visualization systems, and how different summarization methods influence the analyses supported by those systems. We use these results to synthesize common patterns in real-world use of summary visualizations and highlight open challenges and opportunities that these patterns offer for designing effective systems. This work provides a more principled understanding of design practices for summary visualization and offers insight into underutilized approaches.

19.
Going beyond established desktop interfaces, researchers have begun rethinking visualization approaches to make use of alternative display environments and more natural interaction modalities. In this paper, we investigate how spatially aware mobile displays and a large display wall can be coupled to support graph visualization and interaction. For that purpose, we distribute typical visualization views of classic node-link and matrix representations between the displays. The focus of our work lies in novel interaction techniques that enable users to work with personal mobile devices in combination with the wall. We devised and implemented a comprehensive interaction repertoire that supports basic and advanced graph exploration and manipulation tasks, including selection, details on demand, focus transitions, interactive lenses, and data editing. A qualitative study was conducted to identify the strengths and weaknesses of our techniques. Feedback showed that combining mobile devices and a wall-sized display is useful for diverse graph-related tasks. We also gained valuable insights regarding the distribution of visualization views and interactive tools among the combined displays.

20.
In many areas of medicine, visualization research can help with task simplification, abstraction or complexity reduction. A common visualization approach is to use parameterization techniques that flatten a usually 3D object into a 2D plane. In this state-of-the-art report (STAR), we review such techniques as used in medical visualization and investigate how they can be classified with respect to the handled data and the underlying tasks. Many of these techniques are inspired by mesh parameterization algorithms, which project a triangulation in ℝ³ to a simpler domain in ℝ². It is often claimed that this makes complex structures easier for humans and machines to understand and compare. Within this STAR, we review flattening techniques that have been developed for the analysis of the following medical entities: the circulation system, the colon, the brain, tumors, and bones. For each of these five application scenarios, we have analyzed the tasks and requirements and classified the reviewed techniques with respect to a coding system we developed. Furthermore, we present guidelines for the future development of flattening techniques in these areas.
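A minimal example of such a flattening is a cylindrical parameterization, mapping points on a tubular structure (say, an idealized vessel or colon segment) from ℝ³ to ℝ². Real mesh parameterization methods additionally control distortion, which this sketch ignores.

```python
import math

def flatten_cylindrical(points):
    """Cylindrical parameterization: map (x, y, z) on a tube around
    the z-axis to (u, v) = (angle around the axis, height along it)."""
    return [(math.atan2(y, x), z) for x, y, z in points]

# Two points on a unit cylinder, a quarter turn apart at the same height.
flat = flatten_cylindrical([(1.0, 0.0, 0.5), (0.0, 1.0, 0.5)])
print(flat)  # [(0.0, 0.5), (1.5707963267948966, 0.5)]
```

After flattening, neighborhood and distance comparisons reduce to 2D operations, which is the simplification the surveyed techniques exploit.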


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号