Similar Documents
20 similar documents found (search time: 234 ms)
1.
In many application fields, data analysts have to deal with datasets that contain many expressions per item. The effective analysis of such multivariate datasets is dependent on the user's ability to understand both the intrinsic dimensionality of the dataset as well as the distribution of the dependent values with respect to the dimensions. In this paper, we propose a visualization model that enables the joint interactive visual analysis of multivariate datasets with respect to their dimensions as well as with respect to the actual data values. We describe a dual setting of visualization and interaction in items space and in dimensions space. The visualization of items is linked to the visualization of dimensions with brushing and focus+context visualization. With this approach, the user is able to jointly study the structure of the dimensions space as well as the distribution of data items with respect to the dimensions. Even though the proposed visualization model is general, we demonstrate its application in the context of a DNA microarray data analysis.
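A minimal sketch of the dual items/dimensions setting described above, on toy data (all names and thresholds are illustrative, not from the paper): brushing produces a boolean focus mask in items space, while a correlation-based distance arranges the dimensions in dimensions space so that related dimensions end up close together.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 6))                       # 200 items x 6 dimensions
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=200)   # dims 0 and 1 are related

def brush_items(data, dim, lo, hi):
    """Boolean focus mask: items whose value on `dim` lies in [lo, hi]."""
    return (data[:, dim] >= lo) & (data[:, dim] <= hi)

def dimension_distances(data):
    """Distance between dimensions: 1 - |correlation|, so strongly
    related dimensions appear close in the dimensions-space view."""
    return 1.0 - np.abs(np.corrcoef(data.T))

focus = brush_items(data, dim=0, lo=0.0, hi=2.0)   # focus+context split of items
ddist = dimension_distances(data)                  # layout input for dims view
```

The focus mask would drive the visual accentuation (opacity, detail) of items, while `ddist` would feed a 2D embedding of the dimensions.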

2.
3.
In toponomics, the functional protein pattern in cells or tissue (the toponome) is imaged and analyzed for applications in toxicology, new drug development, and patient-drug interaction. The most advanced imaging technique is robot-driven multi-parameter fluorescence microscopy. This technique is capable of co-mapping hundreds of proteins and their distribution and assembly in protein clusters across a cell or tissue sample by running cycles of fluorescence tagging with monoclonal antibodies or other affinity reagents, imaging, and bleaching in situ. The imaging results in complex multi-parameter data composed of one slice or a 3D volume per affinity reagent. Biologists are particularly interested in the localization of co-occurring proteins, the frequency of co-occurrence and the distribution of co-occurring proteins across the cell. We present an interactive visual analysis approach for the evaluation of multi-parameter fluorescence microscopy data in toponomics. Multiple, linked views facilitate the definition of features by brushing multiple dimensions. The feature specification result is linked to all views, establishing a focus+context visualization in 3D. In a new attribute view, we integrate techniques from graph visualization. Each node in the graph represents an affinity reagent, while each edge represents two co-occurring affinity reagent bindings. The graph visualization is enhanced by glyphs which encode specific properties of the binding. The graph view is equipped with brushing facilities. By brushing in the spatial and attribute domain, the biologist achieves a better understanding of the functional protein patterns of a cell. Furthermore, an interactive table view is integrated which summarizes unique fluorescence patterns. We discuss our approach with respect to a cell probe containing lymphocytes and a prostate tissue section.
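The co-occurrence graph and the unique-pattern table can be sketched on toy data (the per-pixel binary presence matrix and its dimensions are illustrative, not the paper's format):

```python
import numpy as np

# Toy data: per-pixel binary presence of each affinity reagent
# (rows = pixels, columns = reagents).
rng = np.random.default_rng(1)
presence = rng.random((1000, 5)) < 0.3        # bool, 1000 pixels x 5 reagents

# Co-occurrence matrix: entry (i, j) counts pixels where reagents i and j
# are both bound -- the weight of edge (i, j) in the attribute graph view.
cooc = presence.T.astype(int) @ presence.astype(int)

# Unique fluorescence patterns and their frequencies (the table view).
patterns, counts = np.unique(presence, axis=0, return_counts=True)
```

The diagonal of `cooc` gives each reagent's total binding count, which could drive per-node glyph size.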

4.
5.
We present a natural extension of two‐dimensional parallel‐coordinates plots for revealing relationships in time‐dependent multi‐attribute data by building on the idea that time can be considered as the third dimension. A time slice through the visualization represents a certain point in time and can be viewed as a regular parallel‐coordinates display. A vertical slice through one of the axes of the parallel‐coordinates display would show a time‐series plot. For a focus‐and‐context integration of both views, we embed time‐series plots between two adjacent axes of the parallel‐coordinates plot. Both time‐series plots are drawn using a pseudo three‐dimensional perspective with a single vanishing point. An independent parallel‐coordinates panel that connects the two perspectively displayed time‐series plots can move forward and backward in time to reveal changes in the relationship between the time‐dependent attributes. The visualization of time‐series plots in the context of the parallel‐coordinates plot facilitates the exploration of time‐related aspects of the data without the need to switch to a separate display. We provide a consistent set of tools for selecting and contrasting subsets of the data, which are important for various application domains.

6.
We describe a new conceptual methodology and related computational architecture called Knowledge‐based Navigation of Abstractions for Visualization and Explanation (KNAVE). KNAVE is a domain‐independent framework specific to the task of interpretation, summarization, visualization, explanation, and interactive exploration, in a context‐sensitive manner, of time‐oriented raw data and the multiple levels of higher level, interval‐based concepts that can be abstracted from these data. The KNAVE domain‐independent exploration operators are based on the relations defined in the knowledge‐based temporal‐abstraction problem‐solving method, which is used to abstract the data, and thus can directly use the domain‐specific knowledge base on which that method relies. Thus, the domain‐specific semantics are driving the domain‐independent visualization and exploration processes, and the data are viewed through a filter of domain‐specific knowledge. By accessing the domain‐specific temporal‐abstraction knowledge base and the domain‐specific time‐oriented database, the KNAVE modules enable users to query for domain‐specific temporal abstractions and to change the focus of the visualization, thus reusing for a different task (visualization and exploration) the same domain model acquired for abstraction purposes. We focus here on the methodology, but also describe a preliminary evaluation of the KNAVE prototype in a medical domain. Our experiment incorporated seven users, a large medical patient record, and three complex temporal queries, typical of guideline‐based care, that the users were required to answer and/or explore. The results of the preliminary experiment have been encouraging. The new methodology has potentially broad implications for planning, monitoring, explaining, and interactive data mining of time‐oriented data.
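The core step of temporal abstraction, turning raw time-stamped values into interval-based concepts, can be sketched as follows (the thresholds and state names are illustrative stand-ins, not taken from KNAVE's knowledge base):

```python
# Map raw samples to symbolic states, then merge consecutive equal states
# into intervals -- the interval-based concepts the text refers to.
def classify(value):
    if value < 4.0:
        return "LOW"
    if value > 10.0:
        return "HIGH"
    return "NORMAL"

def abstract_intervals(times, values):
    """Return (start, end, state) intervals from point samples."""
    intervals = []
    for t, v in zip(times, values):
        s = classify(v)
        if intervals and intervals[-1][2] == s:
            intervals[-1] = (intervals[-1][0], t, s)   # extend current interval
        else:
            intervals.append((t, t, s))                # open a new interval
    return intervals

iv = abstract_intervals([0, 1, 2, 3, 4], [3.0, 3.5, 7.0, 12.0, 11.0])
# -> [(0, 1, 'LOW'), (2, 2, 'NORMAL'), (3, 4, 'HIGH')]
```

A real knowledge base would also encode context-sensitive thresholds and interval-merging rules; this sketch only shows the point-to-interval step.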

7.
Focus+context visualization integrates a visually accentuated representation of selected data items in focus (more details, more opacity, etc.) with a visually deemphasized representation of the rest of the data, i.e., the context. The role of context visualization is to provide an overview of the data for improved user orientation and improved navigation. A good overview comprises the representation of both outliers and trends. Up to now, however, context visualization has not treated outliers sufficiently. In this paper we present a new approach to focus+context visualization in parallel coordinates which is truthful to outliers in the sense that small-scale features are detected before visualization and then treated specially during context visualization. Generally, we present a solution which enables context visualization at several levels of abstraction, both for the representation of outliers and trends. We introduce outlier detection and context generation to parallel coordinates on the basis of a binned data representation. This leads to an output-oriented visualization approach, which means that only those parts of the visualization process are executed which actually affect the final rendering. Accordingly, the performance of this solution is much more dependent on the visualization size than on the data size, which makes it especially interesting for large datasets. Previous approaches are outperformed; the new solution was successfully applied to datasets with up to 3 million data records and up to 50 dimensions.
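The binned outlier-detection idea can be sketched with a 2D histogram between two adjacent parallel-coordinates axes (bin count and threshold are illustrative, not the paper's parameters):

```python
import numpy as np

def binned_outliers(x, y, bins=32, min_count=2):
    """Flag points that fall into sparsely populated 2D bins between two
    adjacent axes: bin counts below `min_count` mark small-scale features
    to be rendered specially in the context visualization."""
    H, xe, ye = np.histogram2d(x, y, bins=bins)
    # Locate each point's bin (clip so the max value lands in the last bin,
    # matching histogram2d's edge convention).
    xi = np.clip(np.digitize(x, xe) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(y, ye) - 1, 0, bins - 1)
    return H[xi, yi] < min_count          # True -> treat as outlier

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 5000), [8.0]])   # one isolated point
y = np.concatenate([rng.normal(0, 1, 5000), [8.0]])
out = binned_outliers(x, y)
```

Dense bins would instead be aggregated into the trend representation, which is what makes the approach output-oriented: work scales with bins, not records.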

8.
In this paper, we present a novel method for the direct volume rendering of large smoothed‐particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time‐dependent, and multivariate data both as a post‐process and in situ. To address the computational complexity, we introduce stochastic volume rendering that considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are thereby determined both in a view‐dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free‐surface and multi‐phase flows by including a multi‐material model with volumetric and surface shading into the stochastic volume rendering.
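The per-step stochastic particle selection can be sketched as weighted sampling without replacement (the weight definition and subset size are illustrative; the paper derives its probabilities from view-dependence and spatial complexity):

```python
import numpy as np

def sample_step(particle_ids, weights, k, rng):
    """Choose at most k particles for one ray-marching step, with
    probabilities proportional to `weights` (e.g. a view-dependent
    importance times local density). Illustrative, not the paper's code."""
    p = weights / weights.sum()
    k = min(k, len(particle_ids))
    return rng.choice(particle_ids, size=k, replace=False, p=p)

rng = np.random.default_rng(3)
ids = np.arange(1000)
w = rng.random(1000)
subset = sample_step(ids, w, k=32, rng=rng)   # particles used at this step
```

Increasing `k` trades speed for accuracy, which is the continuous preview-to-quality scaling the abstract describes.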

9.
Pathline glyphs     
Visualization of pathlines is common and highly relevant for the analysis of unsteady flow. However, pathlines can intersect, leading to visual clutter and perceptual issues. This makes it intrinsically difficult to provide expressive visualizations of the entire domain by an arrangement of multiple pathlines, in contrast to well‐established streamline placement techniques. We present an approach to reduce these problems. It is inspired by glyph‐based visualization and small multiples: we partition the domain into cells, each corresponding to a downscaled version of the entire domain. Inside these cells, a single downscaled pathline is drawn. On the overview scale, our pathline glyphs lead to emergent visual patterns that provide insight into time‐dependent flow behavior. Zooming in allows us to analyze individual pathlines in detail and compare neighboring lines. The overall approach is complemented with a context‐preserving zoom lens and interactive pathline‐based exploration. While we primarily target the visualization of 2D flow, we also address the extension to 3D. Our evaluation includes several examples, comparison to other flow visualization techniques, and a user study with domain experts.
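A minimal sketch of the one-pathline-per-cell idea (toy velocity field and cell layout; real implementations would use a higher-order integrator such as RK4 rather than explicit Euler):

```python
import numpy as np

def pathline(seed, t0, t1, dt, velocity):
    """Integrate a single pathline with explicit Euler; `velocity(p, t)`
    is the unsteady 2D flow field."""
    p = np.asarray(seed, float)
    pts, t = [p], t0
    while t < t1:
        p = p + dt * velocity(p, t)
        t += dt
        pts.append(p)
    return np.array(pts)

# Toy unsteady field: rigid rotation whose angular speed grows with time.
def vel(p, t):
    return (1.0 + t) * np.array([-p[1], p[0]])

# One downscaled pathline per cell, seeded at the cell center: together
# these form the glyph-based small-multiples overview.
cells = [(x, y) for x in (-0.5, 0.5) for y in (-0.5, 0.5)]
glyphs = [pathline(c, 0.0, 1.0, 0.01, vel) for c in cells]
```

Each glyph would be drawn scaled down inside its cell; the emergent pattern across cells reveals the time-dependent behavior.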

10.
Parallel coordinate plots (PCPs) are commonly used in information visualization to provide insight into multi-variate data. These plots help to spot correlations between variables. PCPs have been successfully applied to unstructured datasets up to a few millions of points. In this paper, we present techniques to enhance the usability of PCPs for the exploration of large, multi-timepoint volumetric data sets, containing tens of millions of points per timestep. The main difficulties that arise when applying PCPs to large numbers of data points are visual clutter and slow performance, making interactive exploration infeasible. Moreover, the spatial context of the volumetric data is usually lost. We describe techniques for preprocessing using data quantization and compression, and for fast GPU-based rendering of PCPs using joint density distributions for each pair of consecutive variables, resulting in a smooth, continuous visualization. Also, fast brushing techniques are proposed for interactive data selection in multiple linked views, including a 3D spatial volume view. These techniques have been successfully applied to three large data sets: Hurricane Isabel (Vis'04 contest), the ionization front instability data set (Vis'08 design contest), and data from a large-eddy simulation of cumulus clouds. With these data, we show how PCPs can be extended to successfully visualize and interactively explore multi-timepoint volumetric datasets with an order of magnitude more data points.
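The two core preprocessing ideas, quantization and per-axis-pair joint densities, can be sketched on toy data (bin counts and quantization levels are illustrative choices, not the paper's):

```python
import numpy as np

def quantize(col, levels=256):
    """8-bit quantization of one variable (the preprocessing step)."""
    lo, hi = col.min(), col.max()
    return np.round((col - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

def joint_densities(data, bins=64):
    """One 2D histogram per pair of consecutive axes; a GPU renderer would
    composite these density images instead of drawing millions of lines."""
    return [np.histogram2d(data[:, i], data[:, i + 1], bins=bins)[0]
            for i in range(data.shape[1] - 1)]

rng = np.random.default_rng(4)
data = rng.normal(size=(100_000, 4))
q = np.stack([quantize(data[:, i]) for i in range(4)], axis=1)
dens = joint_densities(data)
```

Rendering cost then depends on the histogram resolution rather than the record count, which is what makes tens of millions of points per timestep tractable.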

11.
This survey gives an overview of the current state of the art in GPU techniques for interactive large‐scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga‐, tera‐ and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out‐of‐core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. ‘output‐sensitive’ algorithms and system designs. This leads to recent output‐sensitive approaches that are ‘ray‐guided’, ‘visualization‐driven’ or ‘display‐aware’. In this survey, we focus on these characteristics and propose a new categorization of GPU‐based large‐scale volume visualization techniques based on the notions of actual output‐resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.
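The address-translation notion the survey uses as its common context can be sketched as a page table mapping virtual brick coordinates to slots in a brick cache; a miss marks a brick as non-resident so a ray-guided pass can request it. All names, sizes, and the eviction policy here are illustrative.

```python
class BrickCache:
    """Toy page-table-based brick cache (illustrative, not a GPU design)."""
    def __init__(self, slots):
        self.free = list(range(slots))
        self.page_table = {}      # (bx, by, bz) -> cache slot

    def translate(self, brick):
        """Return the cache slot for `brick`, or None on a miss."""
        return self.page_table.get(brick)

    def load(self, brick):
        """Make `brick` resident, evicting naively if the cache is full."""
        if brick in self.page_table:
            return self.page_table[brick]
        if self.free:
            slot = self.free.pop()
        else:
            _evicted, slot = self.page_table.popitem()   # naive eviction
        self.page_table[brick] = slot
        return slot

cache = BrickCache(slots=2)
s0 = cache.load((0, 0, 0))
s1 = cache.load((1, 0, 0))
miss = cache.translate((2, 0, 0))   # not resident -> None
```

Real systems add multi-level page tables and usage-based eviction; the working set is exactly the set of resident bricks needed for the current view.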

12.
13.
Volumetric datasets are increasingly used in medical applications. In many of these applications, visualization and interaction are generally performed on cross‐sectional two‐dimensional (2D) views of three‐dimensional (3D) imaging modalities. Displaying 3D volumetric medical datasets on traditional 2D screens can present problems such as occlusion and information overload, especially when multiple data sources are present. Displaying desired information while showing the relationship to the rest of the dataset(s) can be challenging. In this paper, we present an interactive focus+context visualization approach that uses the volumetric Magic Lens interaction paradigm. We propose to use the Magic Lens as a volumetric brush to perform volume editing tasks, therefore combining data exploration with volumetric editing. Polygon‐assisted ray casting methods are used for real‐time rendering and editing frame rates, while providing compact storage of editing states for undo/redo operations. We discuss the application of our methods to radiation therapy, which is an important cancer treatment modality. We envision that this approach will improve the treatment planning process by improving the therapists' understanding of information from various sources and will help identify if the alignment of the patient in the treatment room coincides with the prepared treatment plan. Copyright © 2013 John Wiley & Sons, Ltd.
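A minimal sketch of a volumetric brush with compact undo state (the spherical brush and the stored-voxels representation are illustrative; the paper's editing states are tied to its polygon-assisted ray casting):

```python
import numpy as np

class VolumeEditor:
    """Each edit stores the touched voxel mask and the previous values,
    so undo is a cheap restore of only those voxels."""
    def __init__(self, volume):
        self.vol = volume
        self.undo_stack = []

    def brush_sphere(self, center, radius, value):
        zs, ys, xs = self.vol.shape
        z, y, x = np.ogrid[0:zs, 0:ys, 0:xs]
        mask = ((x - center[2]) ** 2 + (y - center[1]) ** 2
                + (z - center[0]) ** 2) <= radius ** 2
        self.undo_stack.append((mask, self.vol[mask].copy()))
        self.vol[mask] = value

    def undo(self):
        mask, old = self.undo_stack.pop()
        self.vol[mask] = old

vol = np.zeros((16, 16, 16))
ed = VolumeEditor(vol)
ed.brush_sphere((8, 8, 8), radius=3, value=1.0)   # lens-guided edit
edited_sum = vol.sum()
ed.undo()                                         # restore previous state
```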

14.
Analyzing molecular dynamics (MD) simulations is a key aspect to understand protein dynamics and function. With increasing computational power, it is now possible to generate very long and complex simulations, which are cumbersome to explore using traditional 3D animations of protein movements. Guided by requirements derived from multiple focus groups with protein engineering experts, we designed and developed a novel interactive visual analysis approach for long and crowded MD simulations. In this approach, we link a dynamic 3D focus+context visualization with a 2D chart of time series data to guide the detection and navigation towards important spatio‐temporal events. The 3D visualization renders elements of interest in more detail and increases the temporal resolution dependent on the time series data or the spatial region of interest. In case studies with different MD simulation data sets and research questions, we found that the proposed visual analysis approach facilitates exploratory analysis to generate, confirm, or reject hypotheses about causalities. Finally, we derived design guidelines for interactive visual analysis of complex MD simulation data.

15.
Over the past decade, computer scientists and psychologists have made great efforts to collect and analyze facial dynamics data that exhibit different expressions and emotions. Such data are commonly captured as videos and transformed into feature‐based time‐series prior to any analysis. However, the analytical tasks, such as expression classification, have been hindered by the lack of understanding of the complex data space and the associated algorithm space. Conventional graph‐based time‐series visualization is also found inadequate to support such tasks. In this work, we adopt a visual analytics approach by visualizing the correlation between the algorithm space and our goal: classifying facial dynamics. We transform multiple feature‐based time‐series for each expression in measurement space to a multi‐dimensional representation in parameter space. This enables us to utilize parallel coordinates visualization to gain an understanding of the algorithm space, providing a fast and cost‐effective means to support the design of analytical algorithms.
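The measurement-space to parameter-space transform can be sketched by summarizing each time-series with a few parameters (the chosen parameters here, mean, spread, range, and trend slope, are illustrative stand-ins for the paper's measures):

```python
import numpy as np

def series_parameters(ts):
    """Map one feature time-series to a point in parameter space,
    suitable as one polyline in a parallel-coordinates view."""
    ts = np.asarray(ts, float)
    t = np.arange(len(ts))
    slope = np.polyfit(t, ts, 1)[0]           # linear trend
    return np.array([ts.mean(), ts.std(), ts.max() - ts.min(), slope])

rng = np.random.default_rng(5)
series = [rng.normal(size=50).cumsum() for _ in range(20)]   # toy feature series
params = np.array([series_parameters(s) for s in series])    # 20 x 4 matrix
```

Each row of `params` becomes one polyline across four parallel-coordinates axes, so many series can be compared at once.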

16.
Brushing plays a central role in most modern visual analytics solutions, and effective and efficient techniques for data selection are key to establishing a successful human‐computer dialogue. With this paper, we address the need for brushing techniques that are both fast, enabling fluid interaction in visual data exploration and analysis, and accurate, i.e., enabling the user to effectively select specific data subsets, even when their geometric delineation is non‐trivial. We present a new solution for a near‐perfect sketch‐based brushing technique, where we exploit a convolutional neural network (CNN) for estimating the intended data selection from a fast and simple click‐and‐drag interaction and from the data distribution in the visualization. Our key contributions include a drastically reduced error rate (now below 3%, i.e., less than half of the best previously reported accuracy) and an extension to a larger variety of selected data subsets, going beyond previous limitations due to linear estimation models.

17.
The growing sizes of volumetric data sets pose a great challenge for interactive visualization. In this paper, we present a feature-preserving data reduction and focus+context visualization method based on transfer function driven, continuous voxel repositioning and resampling techniques. Rendering reduced data can enhance interactivity. Focus+context visualization can show details of selected features in context on display devices with limited resolution. Our method utilizes the input transfer function to assign importance values to regularly partitioned regions of the volume data. According to user interaction, it can then magnify regions corresponding to the features of interest while compressing the rest by deforming the 3D mesh. The level of data reduction achieved is significant enough to improve overall efficiency. By using continuous deformation, our method avoids the need to smooth the transition between low and high-resolution regions as often required by multiresolution methods. Furthermore, it is particularly attractive for focus+context visualization of multiple features. We demonstrate the effectiveness and efficiency of our method with several volume data sets from medical applications and scientific simulations.
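The transfer-function-driven importance assignment can be sketched as a per-block mean of transfer-function opacity (the block size, the opacity ramp, and using the mean as the aggregate are all illustrative choices):

```python
import numpy as np

def region_importance(volume, tf_opacity, region=8):
    """Importance per regularly partitioned block: mean transfer-function
    opacity of its voxels. High importance -> magnify; low -> compress."""
    n = volume.shape[0] // region
    imp = np.empty((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                block = volume[i * region:(i + 1) * region,
                               j * region:(j + 1) * region,
                               k * region:(k + 1) * region]
                imp[i, j, k] = tf_opacity(block).mean()
    return imp

tf = lambda v: np.clip((v - 0.5) * 4.0, 0.0, 1.0)   # toy opacity ramp
vol = np.zeros((32, 32, 32))
vol[8:16, 8:16, 8:16] = 1.0                          # one opaque "feature"
imp = region_importance(vol, tf)                     # 4 x 4 x 4 importance grid
```

The importance grid would then drive the continuous mesh deformation that magnifies important regions and compresses the rest.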

18.
Providing appropriate methods to facilitate the analysis of time-oriented data is a key issue in many application domains. In this paper, we focus on the unique role of the parameter time in the context of visually driven data analysis. We will discuss three major aspects: visualization, analysis, and the user. It will be illustrated that it is necessary to consider the characteristics of time when generating visual representations. For that purpose we take a look at different types of time and present visual examples. Integrating visual and analytical methods has become an increasingly important issue. Therefore, we present our experiences in temporal data abstraction, principal component analysis, and clustering of larger volumes of time-oriented data. The third main aspect we discuss is supporting user-centered visual analysis. We describe event-based visualization as a promising means to adapt the visualization pipeline to the needs and tasks of users.
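Of the analytical methods mentioned, principal component analysis is easy to sketch generically via the SVD (this is a standard PCA sketch, not the paper's specific pipeline; the toy data are illustrative):

```python
import numpy as np

def pca(X, k=2):
    """PCA via SVD: project the data (rows = time points, columns =
    variables) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                  # projected time points
    variances = S ** 2 / (len(X) - 1)       # per-component variance, descending
    return scores, variances

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 200)
X = np.column_stack([np.sin(t),
                     np.sin(t) + 0.05 * rng.normal(size=200),   # redundant dim
                     0.05 * rng.normal(size=200)])              # noise dim
scores, var = pca(X, k=2)
```

Here the first component absorbs the two correlated sinusoidal variables, illustrating how PCA condenses larger volumes of time-oriented data before visualization.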

19.
20.
Sets of multiple scalar fields can be used to model many types of variation in data, such as uncertainty in measurements and simulations or time‐dependent behavior of scalar quantities. Many structural properties of such fields can be explained by dependencies between different points in the scalar field. Although these dependencies can be of arbitrary complexity, correlation, i.e., the linear dependency, already provides significant structural information. Existing methods for correlation analysis are usually limited to positive correlation, handle only local dependencies, or use combinatorial approximations to this continuous problem. We present a new approach for computing and visualizing correlated regions in sets of 2‐dimensional scalar fields. This paper describes the following three main contributions: (i) An algorithm for hierarchical correlation clustering resulting in a dendrogram, (ii) a generalization of topological landscapes for dendrogram visualization, and (iii) a new method for incorporating negative correlation values in the clustering and visualization. All steps are designed to preserve the special properties of correlation coefficients. The results are visualized in two linked views, one showing the cluster hierarchy as 2D landscape and the other providing a spatial context in the scalar field's domain. Different coloring and texturing schemes coupled with interactive selection support an exploratory data analysis.
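A toy sketch of hierarchical correlation clustering that keeps negative correlation meaningful (using d = 1 - |r| as the distance and a simple single-linkage merge order as a stand-in for the paper's algorithm):

```python
import numpy as np

def correlation_distance(fields):
    """Pairwise distance between scalar fields: d = 1 - |r|, so strongly
    negatively correlated fields also end up close together."""
    r = np.corrcoef(fields)
    return 1.0 - np.abs(r)

def merge_order(dist):
    """Greedy single-linkage agglomeration: the sequence of merges is the
    skeleton of the dendrogram."""
    d = dist.copy()
    np.fill_diagonal(d, np.inf)
    active = set(range(len(d)))
    merges = []
    while len(active) > 1:
        _, i, j = min((d[a, b], a, b) for a in active for b in active if a < b)
        merges.append((i, j))
        for k in active - {i, j}:
            d[i, k] = d[k, i] = min(d[i, k], d[j, k])   # single linkage
        active.discard(j)
    return merges

rng = np.random.default_rng(7)
f0 = rng.normal(size=500)
fields = np.stack([f0,
                   -f0 + 0.01 * rng.normal(size=500),   # strong negative corr
                   rng.normal(size=500)])               # unrelated field
merges = merge_order(correlation_distance(fields))
```

The anticorrelated pair merges first despite r being close to -1, which is the behavior the third contribution targets; the merge sequence would then be rendered as a topological landscape.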


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号