Similar documents (20 results)
1.
In this paper, we study how to visualize large amounts of multidimensional data with a radial visualization. For such a visualization, we study a multi-threaded implementation on the CPU and the GPU. We start by reviewing the approaches that have visualized the largest multidimensional datasets, focusing on those that have used CPU or GPU parallelization. We then consider radial visualizations and describe our approach (called POIViz), which uses points of interest to determine a layout of a large dataset. We detail its parallelization on the CPU and the GPU. We study the efficiency of this approach with different configurations and for large datasets. We show that it can visualize, in less than one second, millions of data items with tens of dimensions, and that it can support “real-time” interactions even for large datasets. We conclude by discussing the advantages and limitations of the proposed visualization.
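A minimal sketch of how a point-of-interest radial layout of this kind can be computed, assuming a RadViz-style placement (dimension anchors spaced evenly on a circle, each record placed at the normalized weighted average of the anchors); the function `radial_layout` and the NumPy formulation are illustrative, not the authors' POIViz implementation. Each record is positioned independently, which is what makes CPU- and GPU-parallel versions of such a layout attractive.

```python
# RadViz-style radial layout sketch: each dimension becomes an anchor
# ("point of interest") on a circle, and every record is placed at the
# normalized weighted average of the anchors, weighted by its values.
import numpy as np

def radial_layout(data: np.ndarray) -> np.ndarray:
    """data: (n_records, n_dims) array of numeric values."""
    n, d = data.shape
    angles = 2.0 * np.pi * np.arange(d) / d
    anchors = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (d, 2)
    # Normalize each dimension to [0, 1] so no attribute dominates.
    mins, maxs = data.min(axis=0), data.max(axis=0)
    norm = (data - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    weights = norm / np.maximum(norm.sum(axis=1, keepdims=True), 1e-12)
    return weights @ anchors                                       # (n, 2) positions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = radial_layout(rng.random((200_000, 16)))  # scale toward millions of rows as memory allows
    print(pts.shape, pts.min(), pts.max())
```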

2.
We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had previously been hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization produced a LIC-like result. The implications for the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and serve as a method for quality control of display methods.
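A sketch of the hill-climbing loop described above, with a stand-in `effectiveness` function; in the paper the utility comes from the simulated neural response to the rendered visualization, which is not reproduced here, and the three parameters are invented placeholders.

```python
# Hill-climbing skeleton: repeatedly perturb the visualization parameters and
# keep only changes that improve the (here, stand-in) effectiveness metric.
import random

def effectiveness(params):                 # placeholder utility function
    streak_len, density, contrast = params
    return -(streak_len - 12) ** 2 - 100 * (density - 0.4) ** 2 - 50 * (contrast - 0.8) ** 2

def hill_climb(params, steps=(1.0, 0.02, 0.02), iters=500, seed=1):
    rng = random.Random(seed)
    best, best_score = list(params), effectiveness(params)
    for _ in range(iters):
        cand = [p + rng.choice((-1, 1)) * s for p, s in zip(best, steps)]
        score = effectiveness(cand)
        if score > best_score:             # accept only improving moves
            best, best_score = cand, score
    return best, best_score

print(hill_climb([5.0, 0.1, 0.5]))
```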

3.
When users want to continue an analysis performed in the past, whether by themselves or by a collaborator, they need an overview of what has been done and found so far. Such an overview helps them gain a shared understanding of each other's analysis strategy and continue the analysis. We aim to support users in this process, and thereby support their exploration awareness. For this purpose, we present an information visualization framework with three linked processes: overview, search, and retrieve. First, we present a user's information interest model that captures key aspects of the exploration process. An exploration overview and keyword- and similarity-based search mechanisms are designed around these key aspects. A metadata view is used to visualize the search results and help users retrieve specific visualizations from past analyses. Finally, we present three case studies and discuss the support offered by the framework for developing exploration awareness.
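A toy illustration of the keyword- and similarity-based search over saved analysis states, assuming hypothetical metadata fields (`id`, `notes`) and plain bag-of-words cosine similarity; the paper's interest model and metadata schema are richer than this.

```python
# Rank saved analysis states by textual similarity to a query.
import math
from collections import Counter

states = [
    {"id": 1, "notes": "scatterplot of sales vs profit, outliers flagged"},
    {"id": 2, "notes": "time series of profit by region"},
    {"id": 3, "notes": "sales outliers drilled down by customer segment"},
]

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, top_k=2):
    q = bow(query)
    return sorted(states, key=lambda s: cosine(q, bow(s["notes"])), reverse=True)[:top_k]

print(search("profit outliers"))
```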

4.
Humans use intuition and experience to classify everything they perceive, but only if the distinguishing patterns are visible. Machine-learning algorithms can learn class information from data sets, but the meaning of the created classes isn't always clear. A proposed mixed-initiative approach combines intuitive visualizations with machine learning to tap into the strengths of both human and machine classification. The use of visualizations in an expert-guided clustering technique displays complex data sets in a way that permits human input into machine clustering. Test participants successfully employed this technique to classify analytic activities using behavioral observations of a creative-analysis task. The results demonstrate how visualization of the machine-learned classification can help users create more robust and intuitive categories.

5.
Data summarization allows analysts to explore datasets that may be too complex or too large to visualize in detail. Designers face a number of design and implementation choices when using summarization in visual analytics systems. While these choices influence the utility of the resulting system, there are no clear guidelines for the use of these summarization techniques. In this paper, we codify summarization use in existing systems to identify key factors in the design of summary visualizations. We use quantitative content analysis to systematically survey examples of visual analytics systems and enumerate the use of these design factors in data summarization. Through this analysis, we expose the relationships among design considerations, strategies for data summarization in visualization systems, and the ways different summarization methods influence the analyses supported by those systems. We use these results to synthesize common patterns in real-world use of summary visualizations and highlight open challenges and opportunities that these patterns offer for designing effective systems. This work provides a more principled understanding of design practices for summary visualization and offers insight into underutilized approaches.

6.

Scientific datasets are often difficult to analyse or visualize, due to their large size and high dimensionality. A multistep approach to address this problem is proposed. Data management techniques are used to identify areas of interest within the dataset. This allows the reduction of a dataset's size and dimensionality, and the estimation of missing values or correction of erroneous entries. The results are displayed using visualization techniques based on perceptual rules. The visualization tools are designed to exploit the power of the low-level human visual system. The result is a set of displays that allow users to perform rapid and accurate exploratory data analysis. In order to demonstrate the techniques, an environmental dataset being used to model salmon growth and migration patterns was visualized. Data mining was used to identify significant attributes and to provide accurate estimates of plankton density. Colour and texture were used to visualize the significant attributes and estimated plankton densities for each month for the years 1956-1964. Experiments run in the laboratory showed that the chosen colours and textures support rapid and accurate element identification, boundary detection, region tracking and estimation. The result is a visualization tool that allows users to quickly locate specific plankton densities and the boundaries they form. Users can compare plankton densities to other environmental conditions like sea surface temperature and current strength. Finally, users can track changes in any of the dataset's attributes on a monthly or yearly basis.
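An illustrative stand-in for two of the steps described: estimating a missing attribute (such as plankton density) from similar records, and binning the estimates into a small number of levels that a display could map to distinct colours or textures. The k-nearest-neighbour estimator and quantile binning are assumptions, not the paper's data-mining model.

```python
# Estimate a missing attribute from similar records, then quantize it into a
# few display levels (each level would get its own colour/texture).
import numpy as np

def knn_estimate(known_x, known_y, query_x, k=5):
    """Estimate missing values from the k nearest rows (Euclidean distance)."""
    d = np.linalg.norm(known_x[None, :, :] - query_x[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return known_y[idx].mean(axis=1)

def to_levels(values, n_levels=5):
    """Map continuous estimates to a small set of discriminable bins."""
    edges = np.quantile(values, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(values, edges)

rng = np.random.default_rng(0)
X, y = rng.random((500, 3)), rng.random(500)        # e.g., SST, current, month
estimates = knn_estimate(X, y, rng.random((10, 3)))
print(to_levels(estimates))
```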

7.
This paper proposes a new experimental framework within which evidence regarding the perceptual characteristics of a visualization method can be collected, and describes how this evidence can be explored to discover principles and insights to guide the design of perceptually near-optimal visualizations. We make the case that each of the current approaches for evaluating visualizations is limited in what it can tell us about optimal tuning and visual design. We go on to argue that our new approach is better suited to optimizing the kinds of complex visual displays that are commonly created in visualization. Our method uses human-in-the-loop experiments to selectively search through the parameter space of a visualization method, generating large databases of rated visualization solutions. Data mining is then used to extract results from the database, ranging from highly specific exemplar visualizations for a particular data set, to more broadly applicable guidelines for visualization design. We illustrate our approach using a recent study of optimal texturing for layered surfaces viewed in stereo and in motion. We show that a genetic algorithm is a valuable way of guiding the human-in-the-loop search through visualization parameter space. We also demonstrate several useful data mining methods including clustering, principal component analysis, neural networks, and statistical comparisons of functions of parameters.
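A skeleton of the genetic-algorithm search through visualization parameter space, where `rate` is a stub standing in for the human-in-the-loop rating collected during the experiments; the population size, parameter count, and operators are placeholders.

```python
# Genetic algorithm over a small vector of visualization parameters.
import random

rng = random.Random(42)
N_PARAMS, POP, GENS = 4, 20, 30            # e.g., texture density, opacity, ...

def rate(individual):                       # stub for a human rating
    return 10 - sum((p - 0.5) ** 2 for p in individual)

def crossover(a, b):
    cut = rng.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

def mutate(ind, sigma=0.05):
    return [min(1.0, max(0.0, p + rng.gauss(0, sigma))) for p in ind]

pop = [[rng.random() for _ in range(N_PARAMS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=rate, reverse=True)
    parents = pop[: POP // 2]               # keep the highest-rated half
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=rate)
print([round(p, 3) for p in best], round(rate(best), 3))
```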

8.
Scientific datasets are often difficult to analyse or visualize, due to their large size and high dimensionality. A multistep approach to address this problem is proposed. Data management techniques are used to identify areas of interest within the dataset. This allows the reduction of a dataset's size and dimensionality, and the estimation of missing values or correction of erroneous entries. The results are displayed using visualization techniques based on perceptual rules. The visualization tools are designed to exploit the power of the low-level human visual system. The result is a set of displays that allow users to perform rapid and accurate exploratory data analysis. In order to demonstrate the techniques, an environmental dataset being used to model salmon growth and migration patterns was visualized. Data mining was used to identify significant attributes and to provide accurate estimates of plankton density. Colour and texture were used to visualize the significant attributes and estimated plankton densities for each month for the years 1956-1964. Experiments run in the laboratory showed that the chosen colours and textures support rapid and accurate element identification, boundary detection, region tracking and estimation. The result is a visualization tool that allows users to quickly locate specific plankton densities and the boundaries they form. Users can compare plankton densities to other environmental conditions like sea surface temperature and current strength. Finally, users can track changes in any of the dataset's attributes on a monthly or yearly basis.

9.
One reason for poor Web site usability is researchers' lack of understanding about how users surf for information. Predictive Web usage visualizations can help analysts uncover traffic patterns and usability problems. The author describes some of the techniques that the User Interface Research Group at PARC has developed to visualize content changes, linkage structures, site usage, and so on. He explores how these techniques can be used to identify specific usability problems at large Web sites. As part of this work, he has created a predictive visualization model called Information Scent, in which simulated users help uncover patterns and deficiencies in information accessibility. He demonstrates these techniques using a prototype system called ScentViz to illustrate how Web usage analysis can be enhanced using visualization methods.
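A toy "simulated surfer" in the spirit of scent-based usage prediction: at each page the surfer follows an outgoing link with probability proportional to that link's scent (its apparent relevance to the goal). The link graph, scent scores, and stopping rule are invented for illustration and are not PARC's actual model.

```python
# Simulated surfers navigating a small link graph weighted by "scent".
import random

links = {                                   # page -> [(target, scent), ...]
    "home":     [("products", 3.0), ("about", 0.5), ("blog", 1.0)],
    "products": [("pricing", 2.5), ("home", 0.2)],
    "blog":     [("home", 0.5), ("products", 0.8)],
    "about":    [("home", 1.0)],
    "pricing":  [],                         # goal page
}

def simulate(rng, start="home", goal="pricing", max_steps=10):
    page, path = start, [start]
    for _ in range(max_steps):
        if page == goal or not links[page]:
            break
        targets, scents = zip(*links[page])
        page = rng.choices(targets, weights=scents, k=1)[0]
        path.append(page)
    return path

hits = sum(simulate(random.Random(i))[-1] == "pricing" for i in range(1000))
print(f"{hits / 1000:.0%} of simulated surfers reached the goal")
```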

10.
Color, as one of the most effective visual variables, is used in many techniques to encode and group data points according to different features. Relations between features and groups appear as visual patterns in the visualization. However, optical illusions may bias perception at the first level of the analysis process. For instance, in pixel-based visualizations, contrast effects make pixels appear brighter if surrounded by a darker area, which distorts the encoded metric quantity of the data points. Even if we are aware of these perceptual issues, our visual cognition system is not able to compensate for these effects accurately. To overcome this limitation, we present a color optimization algorithm based on perceptual metrics and color perception models to reduce physiological contrast or color effects. We evaluate our technique with a user study and find that it doubles the accuracy of users comparing and estimating color-encoded data values. Since the presented technique can be used in any application without adaptation to the visualization itself, we are able to demonstrate its effectiveness on data visualizations in different domains.
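A much-simplified sketch of surround-aware compensation: each pixel's lightness is nudged toward the mean of its local surround, counteracting the simultaneous-contrast effect that makes a pixel on a dark background look brighter than encoded. The window size and correction strength are ad hoc; the paper's algorithm is driven by perceptual metrics and colour-appearance models rather than this linear blend.

```python
# Nudge each pixel toward its local surround mean to partially cancel the
# induced simultaneous-contrast shift (dark surround -> pixel looks brighter).
import numpy as np

def compensate(lightness: np.ndarray, radius: int = 5, strength: float = 0.3):
    h, w = lightness.shape
    padded = np.pad(lightness, radius, mode="edge")
    corrected = np.empty_like(lightness)
    for y in range(h):
        for x in range(w):
            surround = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1].mean()
            # Pull the value toward its surround mean to counteract the illusion.
            corrected[y, x] = lightness[y, x] + strength * (surround - lightness[y, x])
    return np.clip(corrected, 0.0, 1.0)

rng = np.random.default_rng(0)
print(compensate(rng.random((64, 64))).shape)
```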

11.
Interactive visualizations have the potential to greatly enhance our ability to analyze data. Although the user is central to the use of such environments, perceptual and cognitive processes are not well understood within the context of interactive visualization. While cognitive psychology has provided a greater understanding of visual perception and cognition, a theoretical framework that links this knowledge with the interactive visualization process is needed. This article aims to fill this gap by introducing a visual information processing model that facilitates the understanding of how users interact with interactive visualization environments.

12.
We present an evaluation of a parameterized set of 2D icon-based visualization methods in which we quantified how perceptual interactions among visual elements affect effective data exploration. During the experiment, subjects quantified three different design factors for each method: the spatial resolution it could represent, the number of data values it could display at each point, and the degree to which it is visually linear. The class of visualization methods includes Poisson-disk distributed icons where icon size, icon spacing, and icon brightness can be set to a constant or coupled to data values from a 2D scalar field. By coupling only one of those visual components to data, we measured filtering interference for all three design factors. Filtering interference characterizes how different levels of the constant visual elements affect the evaluation of the data-coupled element. Our novel experimental methodology allowed us to generalize this perceptual information, gathered using ad hoc artificial datasets, into quantitative rules for visualizing real scientific datasets. This work also provides a framework for evaluating visualizations of multi-valued data that incorporate additional visual cues, such as icon orientation or color.
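A sketch of how such stimuli can be constructed: naive dart-throwing Poisson-disk sampling places the icons, one visual channel (here brightness) is coupled to a toy 2D scalar field, and the others are held constant. The sampling routine and field are illustrative, not the study's stimulus generator.

```python
# Place icons with a minimum spacing, then couple one channel to the data.
import math, random

def poisson_disk(width, height, min_dist, attempts=5000, rng=random.Random(0)):
    points = []
    for _ in range(attempts):               # naive dart throwing
        p = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(p, q) >= min_dist for q in points):
            points.append(p)
    return points

def scalar_field(x, y):                      # toy 2D field in [0, 1]
    return 0.5 + 0.5 * math.sin(x / 10.0) * math.cos(y / 10.0)

icons = [
    {
        "pos": (x, y),
        "size": 4.0,                          # held constant
        "brightness": scalar_field(x, y),     # the data-coupled channel
    }
    for x, y in poisson_disk(200, 200, min_dist=8.0)
]
print(len(icons), icons[0])
```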

13.
In this paper, we present a novel illustrative multivariate visualization for geological modelling to assist geologists and reservoir engineers in visualizing multivariate datasets in superimposed representations, in contrast to the single-attribute visualizations supported by commercial software. Our approach extends the use of decals from a single surface to 3D irregular grids, using the layering concept to represent multiple attributes. We also build upon prior work to augment the design and implementation of different geological attributes (namely, rock type, porosity, and permeability). More specifically, we propose a new sampling strategy to generate decals for porosity on the geological grid, a hybrid visualization for permeability that combines 2D decals and 3D ellipsoid glyphs, and a perceptually based design that allows us to visualize additional attributes (e.g., oil saturation) while avoiding visual interference between layers. Furthermore, our visual design draws from traditional geological illustrations, facilitating understanding and communication between interdisciplinary teams. An evaluation by domain experts highlights the potential of our approach for geological modelling and interpretation in this complex domain.
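One of the ingredients mentioned above, sketched generically: a symmetric permeability tensor can be turned into an ellipsoid glyph by taking its eigendecomposition (eigenvectors give the glyph axes, eigenvalues their lengths). The tensor values are invented, and the paper's full decal-plus-glyph hybrid is not reproduced.

```python
# Derive ellipsoid glyph geometry from a symmetric 3x3 permeability tensor.
import numpy as np

def ellipsoid_glyph(perm_tensor: np.ndarray):
    """Return (radii, rotation) for an ellipsoid representing the tensor."""
    vals, vecs = np.linalg.eigh(perm_tensor)      # symmetric -> real eigenpairs
    radii = np.sqrt(np.maximum(vals, 0.0))        # axis lengths ~ sqrt(permeability)
    return radii, vecs                            # columns of vecs = glyph axes

k = np.array([[500.0,  50.0,   0.0],              # anisotropic example (mD)
              [ 50.0, 200.0,   0.0],
              [  0.0,   0.0,  20.0]])
radii, rotation = ellipsoid_glyph(k)
print(radii, rotation)
```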

14.
In this paper, we introduce overview visualization tools for large-scale multiple genome alignment data. Genome alignment visualization and, more generally, sequence alignment visualization are important tools for understanding genomic sequence data. As sequencing techniques improve and more data become available, greater demand is being placed on visualization tools to scale to the size of these new datasets. When viewing such large data we cannot convey every detail; instead, we specifically design overview tools to help elucidate large-scale patterns. Perceptual science, signal processing theory, and generality provide a framework for the design of such visualizations that can scale well beyond current approaches. We present Sequence Surveyor, a prototype that embodies these ideas for scalable multiple whole-genome alignment overview visualization. Sequence Surveyor visualizes sequences in parallel, displaying data using variable color, position, and aggregation encodings. We demonstrate how perceptual science can inform the design of visualization techniques that remain visually manageable at scale, and how signal processing concepts can inform aggregation schemes that highlight global trends, outliers, and overall data distributions as the problem scales. These techniques allow us to visualize alignments with over 100 whole bacterial-sized genomes.
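A generic aggregation scheme of the kind alluded to: a per-position signal far longer than the screen is reduced to one bin per pixel column while keeping mean, minimum, and maximum per bin, so both global trends and outliers survive the downsampling. This is a common signal-processing-style reduction, not Sequence Surveyor's exact encoding.

```python
# Bin a long per-position signal to screen width, preserving mean/min/max.
import numpy as np

def bin_signal(values: np.ndarray, n_bins: int):
    edges = np.linspace(0, len(values), n_bins + 1).astype(int)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        chunk = values[lo:hi] if hi > lo else values[lo:lo + 1]
        stats.append((chunk.mean(), chunk.min(), chunk.max()))
    return np.array(stats)                      # (n_bins, 3): mean, min, max

rng = np.random.default_rng(0)
signal = rng.normal(size=5_000_000)             # e.g., a per-base score
signal[2_000_000] = 40.0                        # an outlier that must stay visible
binned = bin_signal(signal, n_bins=1920)
print(binned.shape, binned[:, 2].max())         # the outlier survives in the max channel
```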

15.
An extension to the n-Vision visualization system, which provides users with a 3D virtual world within which they can visualize and manipulate representations of multivariate relations, is discussed. The extension, AutoVisual, is a rule-based system that eliminates the difficulty of choosing among the many alternatives when designing visualizations. AutoVisual designs interactive virtual worlds for visualizing and exploring multivariate relations. It is guided by user-specified visualization tasks and a rule base of design principles. AutoVisual's visualization techniques and the visualization tasks it handles are described. Example visualizations AutoVisual has generated for two problem domains are discussed.
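A toy illustration of the rule-based idea, assuming a hand-written rule table keyed on the user's task and the number of variables in the relation; the rules and technique names are invented and far simpler than AutoVisual's design principles.

```python
# Pick a visualization technique by matching simple design rules.
RULES = [
    # (task, predicate on number of variables, chosen technique)
    ("compare", lambda n: n <= 2, "grouped bar chart"),
    ("compare", lambda n: n > 2,  "small multiples of height fields"),
    ("explore", lambda n: n <= 3, "interactive 3D scatterplot"),
    ("explore", lambda n: n > 3,  "nested-axes (worlds-within-worlds) view"),
]

def design(task: str, n_variables: int) -> str:
    for rule_task, predicate, technique in RULES:
        if rule_task == task and predicate(n_variables):
            return technique
    return "default 2D plot"

print(design("explore", 5))
```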

16.
Interaction is critical to effective visualization, but can be difficult to author and debug due to dependencies among input events, program state, and visual output. Recent advances leverage reactive semantics to support declarative design and avoid the “spaghetti code” of imperative event handlers. While reactive programming improves many aspects of development, textual specifications still fail to convey the complex runtime dynamics. In response, we contribute a set of visual debugging techniques to reveal the runtime behavior of reactive visualizations. A timeline view records input events and dynamic variable updates, allowing designers to replay and inspect the propagation of values step-by-step. On-demand annotations overlay the output visualization to expose relevant state and scale mappings in-situ. Dynamic tables visualize how backing datasets change over time. To evaluate the effectiveness of these techniques, we study how first-time Vega users debug interactions in faulty, unfamiliar specifications; with no prior knowledge, participants were able to accurately trace errors through the specification.
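A minimal reactive-signal sketch with a recorded update timeline, showing the kind of runtime trace such debugging views can be built on; the `Signal` class and the mouse/brush example are invented and do not reflect Vega's dataflow implementation.

```python
# Tiny reactive dataflow: setting a signal propagates to derived signals and
# every update is appended to a replayable timeline.
class Signal:
    timeline = []                               # (step, name, value) records

    def __init__(self, name, value=None, deps=(), fn=None):
        self.name, self.value, self.deps, self.fn = name, value, list(deps), fn
        self.listeners = []
        for d in self.deps:
            d.listeners.append(self)

    def set(self, value, step):
        self.value = value
        Signal.timeline.append((step, self.name, value))
        for listener in self.listeners:         # propagate to derived signals
            listener.set(listener.fn(*[d.value for d in listener.deps]), step)

# mouse_x drives a derived "brush extent" signal
mouse_x = Signal("mouse_x", 0)
brush = Signal("brush", (0, 0), deps=[mouse_x], fn=lambda x: (max(0, x - 20), x + 20))

for step, x in enumerate([10, 55, 120]):        # simulated input events
    mouse_x.set(x, step)

for record in Signal.timeline:                  # replayable step-by-step log
    print(record)
```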

17.
We propose a novel scheme to visualize combinatorial auctions: auctions that involve the simultaneous sale of multiple items. Buyers bid on complementary sets of items, or bundles, where the utility of securing all the items in the bundle is more than the sum of the utilities of the individual items. Our visualizations use concentric rings divided into arcs to visualize the bundles in an auction. The arcs' positions and overlaps allow viewers to identify and follow bidding strategies. Properties of color, texture, and motion are used to represent different attributes of the auction, including active bundles, prices bid for each bundle, winning bids, and bidders' interests. Keyframe animations are used to show changes in an auction over time. We demonstrate our visualization technique on a standard testbed dataset generated by researchers to evaluate combinatorial auction bid strategies, and on recent Federal Communications Commission (FCC) auctions designed to allocate wireless spectrum licenses to cell phone service providers.
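One plausible geometry for the ring-and-arc encoding, assuming each item is given an equal angular sector and a bundle's arc spans from its first to its last item's sector on its own ring; the items, bids, and layout rule are invented and may differ from the authors' exact design.

```python
# Compute ring index and angular extent for each bundle's arc.
import math

items = ["A", "B", "C", "D", "E", "F"]          # items up for auction
bundles = {"bid1": ["A", "B"], "bid2": ["B", "C", "D"], "bid3": ["E", "F"]}

sector = 2 * math.pi / len(items)               # one angular sector per item

def bundle_arc(bundle, ring_index):
    """Return (ring, start_angle, end_angle) in radians for a bundle's arc."""
    idxs = sorted(items.index(it) for it in bundle)
    return ring_index, idxs[0] * sector, (idxs[-1] + 1) * sector

for ring, (name, bundle) in enumerate(bundles.items()):
    print(name, bundle_arc(bundle, ring))
```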

18.

Association rules mining is a popular data mining modeling tool. It discovers interesting associations or correlation relationships among a large set of data items, showing attribute values that occur frequently together in a given dataset. Despite their great potential benefit, current association rules modeling tools are far from optimal. This article studies how visualization techniques can be applied to facilitate the association rules modeling process, particularly which visualization elements should be incorporated and how they can be displayed. Original designs for the visualization of rules, the integration of data and rule visualizations, and the visualization of the rule derivation process for supporting interactive visual association rules modeling are proposed in this research. Experimental results indicated that, compared to an automatic association rules modeling process, the proposed interactive visual association rules modeling can significantly improve the effectiveness of modeling, enhance understanding of the applied algorithm, and bring users greater satisfaction with the task. The proposed integration of data and rule visualizations can significantly facilitate the understanding of rules compared to their nonintegrated counterpart.
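The support/confidence computation behind the rules such a tool visualizes, shown on a tiny invented transaction set: a rule X -> Y is kept if the combined itemset is frequent enough (support) and Y appears in enough of the transactions that contain X (confidence). Thresholds are arbitrary.

```python
# Enumerate frequent itemsets of size 2-3 and derive high-confidence rules.
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
    {"bread", "milk", "butter"}, {"bread", "milk"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.6

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

rules = []
all_items = set().union(*transactions)
for size in (2, 3):
    for itemset in combinations(sorted(all_items), size):
        s = support(set(itemset))
        if s < MIN_SUPPORT:
            continue
        for i in range(1, size):                 # split into antecedent -> consequent
            for lhs in combinations(itemset, i):
                confidence = s / support(set(lhs))
                if confidence >= MIN_CONFIDENCE:
                    rhs = tuple(x for x in itemset if x not in lhs)
                    rules.append((lhs, rhs, round(s, 2), round(confidence, 2)))

for rule in rules:
    print(rule)
```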

19.
Particle Flurries is an interactive approach to 3D flow visualization. The approach produces a "synoptic visualization" and is used to examine both internal and external flows. Synoptic visualizations give viewers a synopsis of all flow features simultaneously. Good examples of 2D synoptic visualizations are weather maps from both TV news reports and online Web pages. The human visual system is adept at finding patterns within larger contexts, and we hypothesize that synoptic visualization methods will help users find unexpected features more quickly and thus speed the understanding of complex 3D time-varying flows. Particle Flurries (PF) is our effort toward a synoptic visualization of complex pulsatile 3D flow.
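The basic advection loop underlying a flurry-style display, sketched with a toy steady vortex field: short-lived particles are stepped through the velocity field each frame and respawned when they expire, so the whole domain stays populated. The field, time step, and lifetimes are placeholders, not the paper's pulsatile hemodynamic data.

```python
# Advect short-lived particles through a velocity field, respawning on expiry.
import random

def velocity(x, y):                          # toy steady 2D vortex field
    return -y, x

class Particle:
    def __init__(self, rng):
        self.rng = rng
        self.respawn()

    def respawn(self):
        self.x, self.y = self.rng.uniform(-1, 1), self.rng.uniform(-1, 1)
        self.age, self.lifetime = 0, self.rng.randint(20, 60)

    def step(self, dt=0.02):
        vx, vy = velocity(self.x, self.y)
        self.x, self.y = self.x + vx * dt, self.y + vy * dt   # forward Euler
        self.age += 1
        if self.age >= self.lifetime:
            self.respawn()

rng = random.Random(0)
particles = [Particle(rng) for _ in range(5000)]
for frame in range(100):                     # one advection pass per rendered frame
    for p in particles:
        p.step()
print(particles[0].x, particles[0].y)
```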

20.
Analyzing, visualizing, and illustrating changes within time-varying volumetric data is challenging due to the dynamic changes occurring between timesteps. The changes and variations in computational fluid dynamics volumes and atmospheric 3D datasets do not follow any particular transformation. Features within the data move at different speeds and in different directions, making the tracking and visualization of these features a difficult task. We introduce a texture-based feature tracking technique to overcome some of the current limitations found in the illustration and visualization of dynamic changes within time-varying volumetric data. Our texture-based technique tracks various features individually and then uses the tracked objects to better visualize structural changes. We show the effectiveness of our texture-based tracking technique with both synthetic and real-world time-varying data. Furthermore, we highlight the specific visualization, annotation, registration, and feature isolation benefits of our technique. For instance, we show how our texture-based tracking can lead to insightful visualizations of time-varying data. Such visualizations, more than traditional visualization techniques, can assist domain scientists in exploring and understanding dynamic changes.
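A generic patch-tracking sketch in the same spirit: the textured window around a feature in one timestep is located in the next timestep by normalized cross-correlation over a small search range. It is 2D rather than volumetric for brevity and is not the authors' exact texture-based tracker.

```python
# Track a feature between timesteps by normalized cross-correlation of a patch.
import numpy as np

def track_patch(frame0, frame1, center, half=4, search=6):
    """Find where the patch around `center` in frame0 moved to in frame1."""
    cy, cx = center
    template = frame0[cy - half:cy + half + 1, cx - half:cx + half + 1]
    template = (template - template.mean()) / (template.std() + 1e-9)
    best, best_score = center, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            window = frame1[y - half:y + half + 1, x - half:x + half + 1]
            if window.shape != template.shape:
                continue
            window = (window - window.mean()) / (window.std() + 1e-9)
            score = float((template * window).sum())
            if score > best_score:
                best, best_score = (y, x), score
    return best

rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))     # synthetic motion of (2, 3)
print(track_patch(f0, f1, center=(30, 30)))     # expect roughly (32, 33)
```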
