Similar Articles
 20 similar articles found (search time: 31 ms)
1.
2.
We address the problem of the efficient visualization of large irregular volume data sets by exploiting a multiresolution model based on tetrahedral meshes. Multiresolution models, also called Level-Of-Detail (LOD) models, allow encoding the whole data set at a virtually continuous range of different resolutions. We have identified a set of queries for extracting meshes at variable resolution from a multiresolution model, based on field values, domain location, or opacity of the transfer function. Such queries allow trading off between resolution and speed in visualization. We define a new compact data structure for encoding a multiresolution tetrahedral mesh built through edge collapses to support selective refinement efficiently and show that such a structure has a storage cost from 3 to 5.5 times lower than standard data structures used for tetrahedral meshes. The data structures and variable resolution queries have been implemented together with state-of-the-art visualization techniques in a system for the interactive visualization of three-dimensional scalar fields defined on tetrahedral meshes. Experimental results show that selective refinement queries can support interactive visualization of large data sets.
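As a rough illustration of the edge-collapse primitive such multiresolution tetrahedral models are built from, here is a minimal sketch (hypothetical code, not the paper's compact data structure) that merges one edge of an indexed tetrahedral mesh and discards the tetrahedra that become degenerate:

```python
import numpy as np

def collapse_edge(vertices, tets, u, v):
    """Collapse edge (u, v): move u to the edge midpoint, redirect every
    reference to v onto u, and drop tetrahedra that become degenerate."""
    vertices = vertices.copy()
    vertices[u] = 0.5 * (vertices[u] + vertices[v])    # u moves to the midpoint
    tets = np.where(tets == v, u, tets)                # v is merged into u
    keep = np.array([len(set(t)) == 4 for t in tets])  # degenerate = repeated vertex index
    return vertices, tets[keep]

# toy mesh: two tetrahedra sharing the face (0, 1, 2)
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
tets = np.array([[0, 1, 2, 3], [0, 1, 2, 4]])

# collapsing edge (0, 4) removes the tetrahedron that contains both endpoints
verts2, tets2 = collapse_edge(verts, tets, 0, 4)
print(len(tets), "->", len(tets2), "tetrahedra")       # 2 -> 1
```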

3.
In this paper, a new approach for centralised and distributed learning from spatial heterogeneous databases is proposed. The centralised algorithm consists of a spatial clustering followed by local regression aimed at learning relationships between driving attributes and the target variable inside each region identified through clustering. For distributed learning, similar regions in multiple databases are first discovered by applying a spatial clustering algorithm independently on all sites, and then identifying corresponding clusters on participating sites. Local regression models are built on identified clusters and transferred among the sites for combining the models responsible for identified regions. Extensive experiments on spatial data sets with missing and irrelevant attributes, and with different levels of noise, resulted in a higher prediction accuracy of both centralised and distributed methods, as compared to using global models. In addition, experiments performed indicate that both methods are computationally more efficient than the global approach, due to the smaller data sets used for learning. Furthermore, the accuracy of the distributed method was comparable to the centralised approach, thus providing a viable alternative to moving all data to a central location.  相似文献   
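To make the two-step structure concrete, below is a minimal sketch of the centralised variant on synthetic data (k-means, linear regression and all names are illustrative assumptions, not the paper's specific clustering or regression methods):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
coords = np.vstack([rng.normal([2, 2], 0.8, size=(n, 2)),    # "western" region
                    rng.normal([8, 8], 0.8, size=(n, 2))])   # "eastern" region
drivers = rng.normal(size=(2 * n, 3))                        # driving attributes
is_west = np.arange(2 * n) < n
target = np.where(is_west,
                  2 * drivers[:, 0] + drivers[:, 1],         # relationship in the west
                  -drivers[:, 0] + 3 * drivers[:, 2])        # different relationship in the east
target = target + rng.normal(0, 0.1, 2 * n)

# step 1: spatial clustering on coordinates only
regions = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# step 2: one local regression per discovered region
for r in np.unique(regions):
    m = regions == r
    local = LinearRegression().fit(drivers[m], target[m])
    print(f"region {r}: local R^2 = {local.score(drivers[m], target[m]):.3f}")

# a single global model for comparison
print("global R^2 =", round(LinearRegression().fit(drivers, target).score(drivers, target), 3))
```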

4.
While many data sets contain multiple relationships, depicting more than one data relationship within a single visualization is challenging. We introduce Bubble Sets as a visualization technique for data that has both a primary data relation with a semantically significant spatial organization and a significant set membership relation in which members of the same set are not necessarily adjacent in the primary layout. In order to maintain the spatial rights of the primary data relation, we avoid layout adjustment techniques that improve set cluster continuity and density. Instead, we use a continuous, possibly concave, isocontour to delineate set membership, without disrupting the primary layout. Optimizations minimize cluster overlap and provide for calculation of the isocontours at interactive speeds. Case studies show how this technique can be used to indicate multiple sets on a variety of common visualizations.  相似文献   
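The core idea of outlining a set with a continuous, possibly concave isocontour over a fixed layout can be sketched by summing a kernel "energy" field over the set members and contouring it at one level; this is an illustration only, not the paper's energy function, virtual-edge routing or overlap optimisation:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
layout = rng.uniform(0, 10, size=(40, 2))                 # fixed primary layout (not adjusted)
members = rng.choice(40, size=12, replace=False)          # items sharing the set membership

# accumulate a Gaussian "energy" contribution around every set member
xs, ys = np.meshgrid(np.linspace(0, 10, 300), np.linspace(0, 10, 300))
energy = np.zeros_like(xs)
for px, py in layout[members]:
    energy += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * 0.6 ** 2))

plt.scatter(layout[:, 0], layout[:, 1], c='lightgrey')    # non-members
plt.scatter(layout[members, 0], layout[members, 1], c='crimson')
# the isocontour delineates membership; with widely scattered members it may split
# into several loops -- the published technique adds virtual edges and optimisation
# to keep each set visually connected and to reduce overlap
plt.contour(xs, ys, energy, levels=[0.35], colors='crimson')
plt.gca().set_aspect('equal')
plt.show()
```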

5.
Visualization techniques are of increasing importance in exploring and analyzing large amounts of multidimensional information. One important class of visualization techniques which is particularly interesting for visualizing very large multidimensional data sets is the class of pixel-oriented techniques. The basic idea of pixel-oriented visualization techniques is to represent as many data objects as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. A number of different pixel-oriented visualization techniques have been proposed in recent years and it has been shown that the techniques are useful for visual data exploration in a number of different application contexts. In this paper, we discuss a number of issues which are important in developing pixel-oriented visualization techniques. The major goal of this article is to provide a formal basis of pixel-oriented visualization techniques and show that the design decisions in developing them can be seen as solutions of well-defined optimization problems. This is true for the mapping of the data values to colors, the arrangement of pixels inside the subwindows, the shape of the subwindows, and the ordering of the dimension subwindows. The paper also discusses the design issues of special variants of pixel-oriented techniques for visualizing large spatial data sets.

6.
In many applications, data is collected and indexed by geo-spatial location. Discovering interesting patterns through visualization is an important way of gaining insight about such data. A previously proposed approach is to apply local placement functions such as PixelMaps that transform the input data set into a solution set that preserves certain constraints while making interesting patterns more obvious and avoiding data loss from overplotting. In our experience, this family of spatial transformations can reveal fine structures in large point sets, but it is sometimes difficult to relate those structures to basic geographic features such as cities and regional boundaries. Recent information visualization research has addressed other types of transformation functions that produce spatially transformed maps with recognizable shapes. These types of spatial transformation are called global shape functions. In particular, cartogram-based map distortion has been studied. On the other hand, cartogram-based distortion does not handle point sets readily. In this study, we present a framework that allows the user to specify a global shape function and a local placement function. We combine cartogram-based layout (global shape) with PixelMaps (local placement), obtaining some of the benefits of each toward improved exploration of dense geo-spatial data sets.
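A toy "local placement function" in the spirit described: each point is snapped to a pixel grid and collisions are resolved by searching outward in rings for the nearest free pixel, so no point is lost to overplotting. This is only a sketch of the idea, not the PixelMaps algorithm or the cartogram (global shape) stage:

```python
import numpy as np

def place_without_overplot(points, grid=100):
    """Snap points (in [0,1]^2) to a grid; if a cell is taken, search outward in
    growing square rings until a free pixel is found."""
    occupied = np.zeros((grid, grid), dtype=bool)
    placed = []
    for x, y in points:
        cx, cy = min(int(x * grid), grid - 1), min(int(y * grid), grid - 1)
        for radius in range(grid):                 # ring 0 is the cell itself
            ring = [(cx + dx, cy + dy)
                    for dx in range(-radius, radius + 1)
                    for dy in range(-radius, radius + 1)
                    if max(abs(dx), abs(dy)) == radius]
            free = [(i, j) for i, j in ring
                    if 0 <= i < grid and 0 <= j < grid and not occupied[i, j]]
            if free:                               # nearest free pixel on this ring wins
                i, j = min(free, key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
                occupied[i, j] = True
                placed.append((i, j))
                break
    return np.array(placed)

pts = np.random.default_rng(2).normal(0.5, 0.05, size=(2000, 2)).clip(0, 1)
pixels = place_without_overplot(pts)
print(len(pixels), "points placed on distinct pixels")     # none lost to overdraw
```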

7.
Data in its raw form can potentially contain valuable information, but much of that value is lost if it cannot be presented to a user in a way that is useful and meaningful. Data visualization techniques offer a solution to this issue. Such methods are especially useful in spatial data domains such as medical scan data and geophysical data. However, to properly see trends in data or to relate data from multiple sources, multiple-data set visualization techniques must be used. In research with the time-line paradigm, we have integrated multiple streaming data sources into a single visual interface. Data visualization takes place on several levels, from the visualization of query results in a time-line fashion to using multiple visualization techniques to view, analyze, and compare the data from the results. A significant contribution of this research effort is the extension and combination of existing research efforts into the visualization of multiple-data sets to create new and more flexible techniques. We specifically address visualization issues regarding clarity, speed, and interactivity. The developed visualization tools have also led recently to the visualization querying paradigm and challenge highlighted herein.  相似文献   

8.
In this paper, a new algorithm named polar self-organizing map (PolSOM) is proposed. PolSOM is constructed on a 2-D polar map with two variables, radius and angle, which represent data weight and feature, respectively. Compared with traditional algorithms that project data on a Cartesian map using the Euclidean distance as the only variable, PolSOM not only preserves the data topology and the inter-neuron distance but also visualizes the differences among clusters in terms of weight and feature. In PolSOM, the visualization map is divided into tori and circular sectors by radial and angular coordinates, and neurons are set on the boundary intersections of circular sectors and tori as benchmarks to attract the data with similar attributes. Each datum is projected onto the map with polar coordinates that are trained towards the winning neuron. As a result, similar data group together, and data characteristics are reflected by their positions on the map. Simulations and comparisons with Sammon's mapping, SOM and ViSOM are provided based on four data sets. The results demonstrate the effectiveness of the PolSOM algorithm for multidimensional data visualization.
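For orientation, the sketch below trains an ordinary self-organizing map whose neuron lattice is laid out on a polar (radius, angle) grid; it is a generic stand-in that only illustrates the polar layout and deliberately does not reproduce PolSOM's weight/feature coordinate updates:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
X[:150] += 3.0                                             # two loose clusters in data space

radii = np.linspace(0.4, 1.0, 3)
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
grid = np.array([(r, a) for r in radii for a in angles])   # polar lattice coordinates
xy = np.column_stack([grid[:, 0] * np.cos(grid[:, 1]),     # lattice positions in the plane
                      grid[:, 0] * np.sin(grid[:, 1])])
D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)   # pairwise lattice distances
protos = rng.normal(size=(len(grid), X.shape[1]))             # prototypes in data space

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)
    sigma = 0.6 * (1 - epoch / 30) + 0.05
    for x in X[rng.permutation(len(X))]:
        winner = np.argmin(np.linalg.norm(protos - x, axis=1))
        h = np.exp(-D[:, winner] ** 2 / (2 * sigma ** 2))      # lattice neighbourhood
        protos += lr * h[:, None] * (x - protos)

# every datum is displayed at the (radius, angle) of its winning neuron
winners = np.argmin(np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2), axis=1)
print("display position of the first datum (radius, angle):", grid[winners[0]])
```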

9.
Visualization techniques are becoming increasingly important for analyzing and exploring large multidimensional data sets. One of the most important of these is pixel-oriented visualization, whose basic principle is to map each data value in the data set to a pixel on the screen and to arrange these pixels fully according to certain rules, so that as many data objects as possible are displayed on the screen as familiar graphics and images. The recursive pattern technique is one kind of pixel-oriented visualization; it is based on a simple back-and-forth arrangement, lets users take part in defining the structure and setting parameters, and is mainly suited to data sets with a natural ordering. In stock data analysis, the recursive pattern technique makes it relatively easy to depict how stock prices in a transaction database change over time and to predict stock trends.
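A minimal sketch of the basic back-and-forth (snake) arrangement underlying recursive patterns, applied to a synthetic price series: each trading day becomes one pixel and rows alternate direction. The data and the single-level arrangement are illustrative assumptions; the full technique nests this pattern recursively with user-defined widths and heights:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
prices = 100 + np.cumsum(rng.normal(0, 1, size=5 * 52))   # one year of 5-day trading weeks

rows, cols = 52, 5                        # one row per week, one pixel per trading day
image = prices.reshape(rows, cols).astype(float)
image[1::2] = image[1::2, ::-1]           # reverse every second row: back-and-forth order

plt.imshow(image, aspect='auto', cmap='viridis')
plt.xlabel('day within week (snake order)')
plt.ylabel('week')
plt.colorbar(label='closing price')
plt.show()
```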

10.
Feature detection and display are the essential goals of the visualization process. Most visualization software achieves these goals by mapping properties of sampled intensity values and their derivatives to color and opacity. In this work, we propose to explicitly study the local frequency distribution of intensity values in broader neighborhoods centered around each voxel. We have found frequency distributions to contain meaningful and quantitative information that is relevant for many kinds of feature queries. Our approach allows users to enter predicate-based hypotheses about relational patterns in local distributions and render visualizations that show how neighborhoods match the predicates. Distributions are a familiar concept to nonexpert users, and we have built a simple graphical user interface for forming and testing queries interactively. The query framework readily applies to arbitrary spatial data sets and supports queries on time variant and multifield data. Users can directly query for classes of features previously inaccessible in general feature detection tools. Using several well-known data sets, we show new quantitative features that enhance our understanding of familiar visualization results.  相似文献   
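The flavour of a distribution-based predicate query can be sketched as follows: for every voxel, simple statistics of a fixed neighbourhood are tested against a user condition (here, "at least 40% of neighbourhood values lie in [0.7, 1.0]"). The brute-force window and the particular predicate are illustrative and far simpler than the paper's general framework:

```python
import numpy as np

rng = np.random.default_rng(5)
volume = rng.random((32, 32, 32))
volume[10:20, 10:20, 10:20] = rng.uniform(0.7, 1.0, (10, 10, 10))   # embedded "feature"

def predicate(neigh):
    """True where at least 40% of the neighbourhood falls in the value band [0.7, 1.0]."""
    return np.mean((neigh >= 0.7) & (neigh <= 1.0)) >= 0.4

r = 2                                           # neighbourhood radius -> 5x5x5 window
match = np.zeros(volume.shape, dtype=bool)
for i in range(r, volume.shape[0] - r):
    for j in range(r, volume.shape[1] - r):
        for k in range(r, volume.shape[2] - r):
            match[i, j, k] = predicate(volume[i-r:i+r+1, j-r:j+r+1, k-r:k+r+1])

print("voxels whose local distribution matches the predicate:", match.sum())
```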

11.
A spatial query interface has been designed and implemented in the object-oriented paradigm for heterogeneous data sets. The object-oriented approach presented is shown to be highly suitable for querying typical multiple heterogeneous sources of spatial data. The spatial query model takes into consideration two common components of spatial data: spatial location and attributes. Spatial location allows users to specify an area or a region of interest, also known as a spatial range query. Also, the spatial query allows users to query spatial orientation and relationships (geometric and topological relationships) among other spatial data within the selected area or region. Queries on the properties and values of attributes provide more detailed non-spatial characteristics of spatial data. A query model specific to spatial data involves exploitation of both spatial and attribute components. This paper presents a conceptual spatial query model of heterogeneous data sets based on the object-oriented data model used in the geospatial information distribution system (GIDS).  相似文献   
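The two query components described, a spatial range and an attribute filter, combine in a few lines; the class and field names below are hypothetical and not the GIDS API:

```python
from dataclasses import dataclass

@dataclass
class Feature:                       # hypothetical spatial object with attributes
    x: float
    y: float
    attrs: dict

def range_query(features, xmin, ymin, xmax, ymax, attr_filter=lambda a: True):
    """Select features inside a rectangular region of interest, then filter on attributes."""
    return [f for f in features
            if xmin <= f.x <= xmax and ymin <= f.y <= ymax and attr_filter(f.attrs)]

data = [Feature(1.0, 2.0, {"type": "well", "depth": 120}),
        Feature(3.5, 1.0, {"type": "road", "lanes": 2}),
        Feature(2.2, 2.8, {"type": "well", "depth": 45})]

deep_wells = range_query(data, 0, 0, 4, 4,
                         attr_filter=lambda a: a.get("type") == "well" and a.get("depth", 0) > 100)
print(deep_wells)
```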

12.
In this paper we present a new approach to the interactive visual analysis of time‐dependent scientific data – both from measurements as well as from computational simulation – by visualizing a scalar function over time for each of tens of thousands or even millions of sample points. In order to cope with overdrawing and cluttering, we introduce a new four‐level method of focus+context visualization. Based on a setting of coordinated, multiple views (with linking and brushing), we integrate three different kinds of focus and also the context in every single view. Per data item we use three values (each from the unit interval) to represent to which degree the data item is part of the respective focus level. We present a color compositing scheme which is capable of expressing all three values in a meaningful way, taking semantics and their relations amongst each other (in the context of our multiple linked view setup) into account. Furthermore, we present additional image‐based postprocessing methods to enhance the visualization of large sets of function graphs, including a texture‐based technique based on line integral convolution (LIC). We also propose advanced brushing techniques which are specific to the time‐dependent nature of the data (in order to brush patterns over time more efficiently). We demonstrate the usefulness of the new approach in the context of medical perfusion data.
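As a small illustration of brushing function graphs over time with a fractional focus degree, the sketch below highlights the graphs that stay above a threshold during a brushed time interval and fades the rest into context; it is an assumption-laden toy, not the paper's three-level compositing or LIC post-processing:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 200)
graphs = np.cumsum(rng.normal(0, 0.05, size=(400, 200)), axis=1)   # 400 function graphs over time

# degree of interest = fraction of the brushed window t in [0.6, 0.8]
# during which a graph stays above 0.5
window = (t >= 0.6) & (t <= 0.8)
doi = np.mean(graphs[:, window] > 0.5, axis=1)       # a value in [0, 1] per graph

for g, d in zip(graphs, doi):
    if d == 0:
        plt.plot(t, g, color='lightgrey', linewidth=0.5, zorder=1)                     # context
    else:
        plt.plot(t, g, color='crimson', alpha=0.2 + 0.8 * d, linewidth=0.8, zorder=2)  # focus
plt.axvspan(0.6, 0.8, color='gold', alpha=0.2)       # the brushed time interval
plt.show()
```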

13.
The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, which requires multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our new method provides a framework to combine multiple computational approaches and pushes the boundary of gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the high-dimensional data of the domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our new dimensionality reduction algorithms with two volumetric confocal microscopy data sets.
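A hedged sketch of the feature-space reduction step on synthetic two-channel data: per-voxel feature vectors (intensity and gradient magnitude per channel) are reduced to 2-D with PCA and Isomap from scikit-learn; curvature/texture features and the actual transfer-function widget are omitted:

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(7)
channel_a = rng.random((24, 24, 24))                         # two synthetic "confocal channels"
channel_b = ndimage.gaussian_filter(rng.random((24, 24, 24)), 2)

def grad_mag(v):
    g = np.gradient(v)
    return np.sqrt(g[0] ** 2 + g[1] ** 2 + g[2] ** 2)

# per-voxel feature vector: intensities plus gradient magnitudes of both channels
features = np.column_stack([c.ravel() for c in
                            (channel_a, channel_b, grad_mag(channel_a), grad_mag(channel_b))])

# reduce the per-voxel feature space to 2-D; either embedding could then drive
# a low-dimensional transfer-function widget
pca_coords = PCA(n_components=2).fit_transform(features)
subset = rng.choice(len(features), 2000, replace=False)      # Isomap is costly; subsample
iso_coords = Isomap(n_components=2, n_neighbors=10).fit_transform(features[subset])
print(pca_coords.shape, iso_coords.shape)
```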

14.
We present an algorithm for adaptively extracting and rendering isosurfaces from compressed time-varying volume data sets. Tetrahedral meshes defined by longest edge bisection are used to create a multiresolution representation of the volume in the spatial domain that is adapted over time to approximate the time-varying volume. The re-extraction of the isosurface at each time step is accelerated with the vertex programming capabilities of modern graphics hardware. A data layout scheme which follows the access pattern indicated by mesh refinement is used to access the volume in a spatially and temporally coherent manner. This data layout scheme allows our algorithm to be used for out-of-core visualization.
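A single refinement step of the longest-edge-bisection scheme the abstract builds on can be sketched as follows (the multiresolution bookkeeping, temporal adaptation and GPU extraction are not shown):

```python
import numpy as np
from itertools import combinations

def bisect_longest_edge(verts, tet):
    """Split one tetrahedron (4 vertex indices) into two children by inserting the
    midpoint of its longest edge; returns the updated vertex array and the children."""
    edges = list(combinations(tet, 2))
    lengths = [np.linalg.norm(verts[a] - verts[b]) for a, b in edges]
    a, b = edges[int(np.argmax(lengths))]                  # the longest edge
    midpoint = 0.5 * (verts[a] + verts[b])
    verts = np.vstack([verts, midpoint])
    m = len(verts) - 1
    others = [v for v in tet if v not in (a, b)]           # the two vertices off the split edge
    child1 = [a, m] + others                               # each child keeps one endpoint
    child2 = [b, m] + others
    return verts, [child1, child2]

verts = np.array([[0., 0., 0.], [2., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
verts, children = bisect_longest_edge(verts, [0, 1, 2, 3])
print("children:", children)      # the edge (0, 1) of length 2 is bisected at its midpoint
```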

15.
Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are Smoothed Particle Hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations.  相似文献   
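The feature-space/object-space split at the heart of the approach can be illustrated with a toy particle set: attributes are clustered in feature space, and selecting a cluster picks out the particles that would seed a segmenting surface in object space. Agglomerative clustering and the synthetic attributes are assumptions; the paper uses its own density-based hierarchical clustering and point-based surface extraction:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(8)
n = 3000
positions = rng.uniform(0, 1, size=(n, 3))                     # unstructured particle positions
# multi-field attributes per particle (e.g. density, temperature, velocity magnitude)
attrs = np.column_stack([rng.gamma(2.0, 1.0, n),
                         rng.normal(300, 40, n),
                         rng.rayleigh(1.0, n)])
attrs[:800] += [5.0, 150.0, 3.0]                               # a distinct particle population

# cluster in feature space (attributes), not in object space
Z = StandardScaler().fit_transform(attrs)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(Z)

# selecting a feature-space cluster yields the particles that would seed the
# segmenting surface in object space
selected = positions[labels == labels[0]]
print("particles in the selected feature-space cluster:", len(selected))
```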

16.
Spatiotemporal Analysis of Sensor Logs using Growth Ring Maps
Spatiotemporal analysis of sensor logs is a challenging research field due to three facts: a) traditional two-dimensional maps do not support multiple events to occur at the same spatial location, b) three-dimensional solutions introduce ambiguity and are hard to navigate, and c) map distortions to solve the overlap problem are unfamiliar to most users. This paper introduces a novel approach to represent spatial data changing over time by plotting a number of non-overlapping pixels, close to the sensor positions in a map. Thereby, we encode the amount of time that a subject spent at a particular sensor to the number of plotted pixels. Color is used in a twofold manner; while distinct colors distinguish between sensor nodes in different regions, the colors' intensity is used as an indicator to the temporal property of the subjects' activity. The resulting visualization technique, called growth ring maps, enables users to find similarities and extract patterns of interest in spatiotemporal data by using humans' perceptual abilities. We demonstrate the newly introduced technique on a dataset that shows the behavior of healthy and Alzheimer transgenic, male and female mice. We motivate the new technique by showing that the temporal analysis based on hierarchical clustering and the spatial analysis based on transition matrices only reveal limited results. Results and findings are cross-validated using multidimensional scaling. While the focus of this paper is to apply our visualization for monitoring animal behavior, the technique is also applicable for analyzing data, such as packet tracing, geographic monitoring of sales development, or mobile phone capacity planning.  相似文献   
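The basic construction of a growth ring map can be sketched as below: one pixel per sensor reading, placed on a square spiral around the sensor position so that earlier readings sit nearer the centre, with colour intensity encoding time. The sensor names and data are made up, and the published technique additionally keeps neighbouring sensors' rings from overlapping and uses region colours, which this toy version omits:

```python
import numpy as np
import matplotlib.pyplot as plt

def growth_ring(center, timestamps, cell=0.01):
    """One pixel per reading, placed on a square spiral of grid cells around the
    sensor position; earlier readings sit closer to the centre."""
    offsets, r = [(0, 0)], 1
    while len(offsets) < len(timestamps):
        offsets += [(dx, dy) for dx in range(-r, r + 1) for dy in range(-r, r + 1)
                    if max(abs(dx), abs(dy)) == r]
        r += 1
    return center + np.array(offsets[:len(timestamps)], float) * cell

rng = np.random.default_rng(9)
sensors = {'cage A': (0.2, 0.3), 'cage B': (0.7, 0.6)}          # hypothetical sensor positions
cmaps = {'cage A': plt.cm.Blues, 'cage B': plt.cm.Oranges}
for name, pos in sensors.items():
    times = np.sort(rng.uniform(0, 24, size=rng.integers(100, 400)))   # visit times (hours)
    pts = growth_ring(np.array(pos), times)
    plt.scatter(pts[:, 0], pts[:, 1], c=times, cmap=cmaps[name], s=4)  # intensity encodes time
plt.gca().set_aspect('equal')
plt.show()
```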

17.
Database visualization helps users process, interpret and act upon large stored data sets. In this paper, we present a Java‐based 3D database visualization tool called J3DV. The J3DV tool successfully solved the problem of data management faced by many other visualization systems by integrating multiple data sources with the visualization tool. The tool utilizes two‐level mapping to transform the data into intermediate data that can be used to render graphs. Intermediate data offers better performance with a two‐tier cache. This visualization tool presents a sound framework, which has good extensibility for plugging in new data sources, supporting new data models and visual presentation types and allowing new graph layout algorithms. Copyright © 2002 John Wiley & Sons, Ltd.  相似文献   

18.
A High-Resolution Skywave Delay Estimation Technique for Loran-C Receivers
This paper proposes a new technique for estimating the skywave delay of Loran-C radio navigation receivers, based on eigendecomposition and the multiple signal classification (MUSIC) algorithm. Its novelty lies in providing a new way to set the receiver's reference (tracking) point in real time. To guard against skywave interference, conventional receivers fix the reference point at a constant position, so the signal-to-noise ratio at the reference point is limited by the pulse envelope and remains low, which greatly increases the time needed to lock onto the reference point. This paper designs an efficient processing method for estimating the skywave delay that separates the arrival times of the groundwave and the skywave under low signal-to-noise conditions. On this basis, a new receiver is designed that selects the optimal reference-point position in real time according to changes in the skywave delay and uses the groundwave arrival time for cycle selection. Because the method can raise the signal-to-noise ratio at the reference point and shorten the time needed to lock onto it, it can greatly improve the performance of existing Loran-C receivers. Finally, a hardware block diagram for implementing the technique is given.
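A generic frequency-domain MUSIC sketch illustrates the eigendecomposition idea on synthetic data: two closely spaced arrivals (a "groundwave" and a delayed "skywave") are resolved via noise-subspace orthogonality. The band, pulse model and parameters are illustrative assumptions and far simpler than a real Loran-C receiver front end:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(10)
freqs = np.linspace(90e3, 110e3, 64)               # frequency bins across the band (Hz)
true_delays = np.array([10e-6, 47.5e-6])           # groundwave, then a skywave 37.5 us later

def steering(tau):
    """Frequency-domain signature of an arrival delayed by tau seconds."""
    return np.exp(-2j * np.pi * freqs * tau)

# snapshots: each pulse contributes two arrivals with random complex amplitudes plus noise
snapshots = np.stack([
    sum(rng.normal(1.0, 0.2) * np.exp(1j * rng.uniform(0, 2 * np.pi)) * steering(t)
        for t in true_delays)
    + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
    for _ in range(300)
])

# eigendecomposition of the sample covariance; with two arrivals assumed, the noise
# subspace is everything beyond the two strongest eigenvectors
R = snapshots.conj().T @ snapshots / len(snapshots)
eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
noise_subspace = eigvecs[:, :-2]

# MUSIC pseudospectrum over a grid of candidate delays
taus = np.linspace(0, 100e-6, 2000)
pseudo = np.array([1.0 / np.linalg.norm(noise_subspace.conj().T @ steering(t)) ** 2
                   for t in taus])
peaks, _ = find_peaks(pseudo)
top_two = peaks[np.argsort(pseudo[peaks])[-2:]]
print("estimated arrival delays (us):", np.sort(taus[top_two]) * 1e6)
```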

19.
We present the multivariate Bayesian scan statistic (MBSS), a general framework for event detection and characterization in multivariate spatial time series data. MBSS integrates prior information and observations from multiple data streams in a principled Bayesian framework, computing the posterior probability of each type of event in each space-time region. MBSS learns a multivariate Gamma-Poisson model from historical data, and models the effects of each event type on each stream using expert knowledge or labeled training examples. We evaluate MBSS on various disease surveillance tasks, detecting and characterizing outbreaks injected into three streams of Pennsylvania medication sales data. We demonstrate that MBSS can be used both as a “general” event detector, with high detection power across a variety of event types, and a “specific” detector that incorporates prior knowledge of an event’s effects to achieve much higher detection power. MBSS has many other advantages over previous event detection approaches, including faster computation and easy interpretation and visualization of results, and allows faster and more accurate event detection by integrating information from the multiple streams. Most importantly, MBSS can model and differentiate between multiple event types, thus distinguishing between events requiring urgent responses and other, less relevant patterns in the data.  相似文献   
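The Gamma-Poisson core of such a scan can be illustrated for a single region and stream: observed counts are Poisson with a shared relative risk q whose Gamma prior is integrated out analytically, and the two marginal likelihoods are combined with an event prior into a posterior probability. All numbers and priors below are made up, and the real MBSS scans many space-time regions, streams and event types:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, baselines, alpha, beta):
    """log P(counts) when counts_i ~ Poisson(q * baseline_i) and the shared relative
    risk q ~ Gamma(alpha, beta); q is integrated out analytically (conjugacy)."""
    C, B = counts.sum(), baselines.sum()
    return (np.sum(counts * np.log(baselines) - gammaln(counts + 1))
            + alpha * np.log(beta) - gammaln(alpha)
            + gammaln(alpha + C) - (alpha + C) * np.log(beta + B))

baselines = np.array([20.0, 35.0, 15.0, 30.0])     # expected counts in one space-time region
counts = np.array([34, 55, 26, 47])                # observed counts (clearly elevated)

# H0: no event, q tightly concentrated near 1;  H1: outbreak, q has prior mean 1.5
log_h0 = log_marginal(counts, baselines, alpha=100.0, beta=100.0)
log_h1 = log_marginal(counts, baselines, alpha=30.0, beta=20.0)
prior_h1 = 0.01                                    # prior probability of an outbreak here

log_odds = (np.log(prior_h1) + log_h1) - (np.log(1 - prior_h1) + log_h0)
print(f"posterior probability of an outbreak in this region: {1 / (1 + np.exp(-log_odds)):.3f}")
```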

20.
Meteorological research involves the analysis of multi-field, multi-scale, and multi-source data sets. In order to better understand these data sets, models and measurements at different resolutions must be analyzed. Unfortunately, traditional atmospheric visualization systems only provide tools to view a limited number of variables and small segments of the data. These tools are often restricted to two-dimensional contour or vector plots or three-dimensional isosurfaces. The meteorologist must mentally synthesize the data from multiple plots to glean the information needed to produce a coherent picture of the weather phenomenon of interest. In order to provide better tools to meteorologists and reduce system limitations, we have designed an integrated atmospheric visual analysis and exploration system for interactive analysis of weather data sets. Our system allows for the integrated visualization of 1D, 2D, and 3D atmospheric data sets in common meteorological grid structures and utilizes a variety of rendering techniques. These tools provide meteorologists with new abilities to analyze their data and answer questions on regions of interest, ranging from physics-based atmospheric rendering to illustrative rendering containing particles and glyphs. In this paper, we will discuss the use and performance of our visual analysis for two important meteorological applications. The first application is warm rain formation in small cumulus clouds. Here, our three-dimensional, interactive visualization of modeled drop trajectories within spatially correlated fields from a cloud simulation has provided researchers with new insight. Our second application is improving and validating severe storm models, specifically the Weather Research and Forecasting (WRF) model. This is done through correlative visualization of WRF model and experimental Doppler storm data.  相似文献   
