Similar Documents
20 similar documents found (search time: 171 ms).
1.
dataView is an application that allows scientists to fly visually through large, regularly‐gridded, time‐varying 3D datasets from their desktop computers. dataView works with data that has been divided into cubes and sub‐cubes (which we call ‘tiles’ and ‘subtiles’), sampled at three levels of detail and written to a terabyte data server built on a PC cluster. dataView is a networked application. The dataView client component that runs on the scientist's computer is used only for user interaction and rendering. The selection of data subtiles for any given scene, and the geometry computation performed on those subtiles to create the virtual world, are performed by dataView components run in parallel on nodes of the PC cluster. This paper describes how we instrumented and tuned the code for improved performance in a networked environment. We report on how we measured network performance, first by inducing network delay and then by running the dataView client component in Washington DC and the compute components in Los Angeles. We report on the effect that tile size, level of detail, and client CPU speed have on performance. We analyze what happens when the geometry computation is performed in parallel using MPI (Message Passing Interface) vs. in serial, and discuss the effect on performance of adding additional computational nodes. Copyright © 2001 John Wiley & Sons, Ltd.
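To make the level-of-detail mechanism concrete, here is a minimal Python sketch of distance-based tile LOD selection — one plausible reading of how a fly-through client might pick among three sampled detail levels. The thresholds, tile coordinates, and function names are invented for illustration and are not taken from dataView itself.

```python
import math

# Hypothetical distance cutoffs for the three detail levels.
LOD_DISTANCES = [50.0, 150.0]         # nearer than 50 -> finest level

def lod_for_tile(tile_center, camera_pos):
    """Pick one of three detail levels by camera distance."""
    d = math.dist(tile_center, camera_pos)
    for level, cutoff in enumerate(LOD_DISTANCES):
        if d < cutoff:
            return level              # 0 = full resolution
    return len(LOD_DISTANCES)         # 2 = coarsest

tiles = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (400.0, 0.0, 0.0)]
camera = (10.0, 0.0, 0.0)
print([lod_for_tile(t, camera) for t in tiles])   # [0, 1, 2]
```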

2.
There are many visualizations that show the trajectory of a moving object to obtain insights into its behavior. In this user study, we test the performance of three of these visualizations with respect to three movement features that occur in vessel behavior. Our goal is to compare the recently presented vessel density by Willems et al. [WvdWvW09] with well‐known trajectory visualizations such as an animation of moving dots and the space‐time cube. We test these visualizations with common maritime analysis tasks by investigating the ability of users to find stopping objects, find fast moving objects, and estimate the busiest routes in vessel trajectories. We test the robustness of the visualizations towards scalability and the influence of complex trajectories using small‐scale synthetic data sets. Performance is measured in terms of correctness and response time. The user test shows that each visualization type excels in correctness for a specific movement feature. Vessel density performs best for finding stopping objects and does not perform significantly worse than the remaining visualizations for the other features. Vessel density is therefore a valuable addition to the toolkit for analyzing trajectories of moving objects, in particular vessel movements, since stops are visualized better while the performance for comparing lanes and finding fast movers is at a similar level to established trajectory visualizations.

3.
High Performance OLAP and Data Mining on Parallel Computers
On-Line Analytical Processing (OLAP) techniques are increasingly being used in decision support systems to provide analysis of data. Queries posed on such systems are quite complex and require different views of data. Analytical models need to capture the multidimensionality of the underlying data, a task for which multidimensional databases are well suited. Multidimensional OLAP systems store data in multidimensional arrays on which analytical operations are performed. Knowledge discovery and data mining require complex operations on the underlying data which can be very expensive in terms of computation time. High performance parallel systems can reduce this analysis time. Precomputed aggregate calculations in a Data Cube can provide efficient query processing for OLAP applications. In this article, we present algorithms for the construction of data cubes on distributed-memory parallel computers. Data is loaded from a relational database into a multidimensional array. We present two methods, sort-based and hash-based, for loading the base cube and compare their performance. Data cubes are used to perform consolidation queries in roll-up operations using dimension hierarchies. Finally, we show how data cubes are used for data mining using Attribute Focusing techniques. We present results for these on the IBM-SP2 parallel machine. Results show that our algorithms and techniques for OLAP and data mining on parallel systems are scalable to a large number of processors, providing a high performance platform for such applications.
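As a concrete illustration of the hash-based loading and roll-up steps the abstract mentions, the following Python sketch aggregates fact rows into a base cube keyed by dimension tuples and then consolidates one dimension through a child-to-parent hierarchy. The data, hierarchy, and function names are hypothetical stand-ins, not the paper's IBM-SP2 implementation.

```python
from collections import defaultdict

def load_base_cube(rows):
    """Hash-based load: aggregate fact rows into base-cube cells.

    Each row is ((dim values...), measure); cells are keyed by the
    full dimension tuple, so hash collisions aggregate in one pass.
    """
    cube = defaultdict(float)
    for dims, measure in rows:
        cube[dims] += measure
    return cube

def roll_up(cube, hierarchy, dim_index):
    """Consolidate one dimension through a child->parent hierarchy map."""
    parent_cube = defaultdict(float)
    for dims, value in cube.items():
        lifted = list(dims)
        lifted[dim_index] = hierarchy[dims[dim_index]]
        parent_cube[tuple(lifted)] += value
    return parent_cube

rows = [(("2023-01", "NY"), 10.0), (("2023-02", "NY"), 5.0),
        (("2023-01", "LA"), 7.0)]
base = load_base_cube(rows)
month_to_year = {"2023-01": "2023", "2023-02": "2023"}
print(dict(roll_up(base, month_to_year, 0)))   # months rolled up to years
```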

4.
Advances in technology coupled with the availability of low‐cost sensors have resulted in the continuous generation of large time series from several sources. In order to visually explore and compare these time series at different scales, analysts need to execute online analytical processing (OLAP) queries that include constraints and group‐by's at multiple temporal hierarchies. Effective visual analysis requires these queries to be interactive. However, while existing OLAP cube‐based structures can support interactive query rates, the exponential memory requirement to materialize the data cube is often unsuitable for large data sets. Moreover, none of the recent space‐efficient cube data structures allow for updates. Thus, the cube must be re‐computed whenever there is new data, making them impractical in a streaming scenario. We propose Time Lattice, a memory‐efficient data structure that makes use of the implicit temporal hierarchy to enable interactive OLAP queries over large time series. Time Lattice is a subset of a fully materialized cube and is designed to handle fast updates and streaming data. We perform an experimental evaluation which shows that the space efficiency of the data structure does not hamper its performance when compared to the state of the art. In collaboration with signal processing and acoustics research scientists, we use the Time Lattice data structure to design the Noise Profiler, a web‐based visualization framework that supports the analysis of noise from cities. We demonstrate the utility of Noise Profiler through a set of case studies.
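The following toy Python sketch captures the general idea of exploiting an implicit temporal hierarchy for streaming aggregation: keep running sums and counts per minute/hour/day bucket, update them in O(#levels) per appended point, and answer group-by queries at any level. This is our simplified reading of the concept, not the actual Time Lattice structure or its API; the level names and granularities are arbitrary.

```python
from collections import defaultdict

LEVELS = {"minute": 60, "hour": 3600, "day": 86400}   # bucket sizes in seconds

class TemporalAggregates:
    def __init__(self):
        # one hash of bucket -> running sum / count per hierarchy level
        self.sums = {lvl: defaultdict(float) for lvl in LEVELS}
        self.counts = {lvl: defaultdict(int) for lvl in LEVELS}

    def append(self, timestamp, value):
        """Streaming update: O(#levels) per point, no recomputation."""
        for lvl, size in LEVELS.items():
            bucket = timestamp // size
            self.sums[lvl][bucket] += value
            self.counts[lvl][bucket] += 1

    def mean(self, level, t0, t1):
        """Group-by query at one temporal resolution over [t0, t1)."""
        size = LEVELS[level]
        out = {}
        for b in range(t0 // size, (t1 + size - 1) // size):
            if self.counts[level][b]:
                out[b] = self.sums[level][b] / self.counts[level][b]
        return out

agg = TemporalAggregates()
for t, v in [(0, 55.0), (30, 61.0), (3700, 58.0)]:   # e.g. noise readings
    agg.append(t, v)
print(agg.mean("hour", 0, 7200))   # {0: 58.0, 1: 58.0}
```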

5.
We present Lyra, an interactive environment for designing customized visualizations without writing code. Using drag‐and‐drop interactions, designers can bind data to the properties of graphical marks to author expressive visualization designs. Marks can be moved, rotated and resized using handles; relatively positioned using connectors; and parameterized by data fields using property drop zones. Lyra also provides a data pipeline interface for iterative, visual specification of data transformations and layout algorithms. Visualizations created with Lyra are represented as specifications in Vega, a declarative visualization grammar that enables sharing and reuse. We evaluate Lyra's expressivity and accessibility through diverse examples and studies with journalists and visualization designers. We find that Lyra enables users to rapidly develop customized visualizations, covering a design space comparable to existing programming‐based tools.

6.
The closed data cube is an effective lossless compression technique: it removes redundant information from the data cube, thereby reducing storage space and speeding up computation while leaving query performance almost unaffected. Hadoop's MapReduce parallel computing model provides the technical support for computing data cubes, and its distributed file system HDFS underpins their storage. To save further storage space and accelerate queries, we propose the closed histogram cube, which builds on the closed data cube: an encoding scheme further reduces storage space, and an index speeds up queries. The Hadoop parallel platform guarantees both the scalability and the load balance of the closed histogram cube. Experiments show that the closed histogram cube compresses the data cube effectively, delivers high query performance, and, exploiting Hadoop's characteristics, computes markedly faster as nodes are added.
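To illustrate the MapReduce pattern the abstract builds on, here is a single-process Python imitation of cube computation: the map step emits one (cuboid cell, measure) pair per dimension subset, and the reduce step sums by key. Hadoop, HDFS, the closed-cube encoding, and the index are all outside the scope of this sketch, and the data is invented.

```python
from collections import defaultdict
from itertools import combinations

def map_row(dims, measure):
    """Emit one key per cuboid (subset of dimensions) covering this row."""
    for r in range(len(dims) + 1):
        for subset in combinations(range(len(dims)), r):
            key = tuple(dims[i] if i in subset else "*"
                        for i in range(len(dims)))
            yield key, measure

def reduce_all(rows):
    """Shuffle-and-reduce stand-in: sum all emitted values by key."""
    acc = defaultdict(float)
    for dims, m in rows:
        for key, v in map_row(dims, m):
            acc[key] += v
    return acc

cube = reduce_all([(("2023", "NY"), 3.0), (("2023", "LA"), 4.0)])
print(cube[("2023", "*")])   # 7.0: roll-up over the second dimension
```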

7.
Computational models, such as simulations, are central to a wide range of fields in science and industry. These models take input parameters and produce some output. To fully exploit their utility, the relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example the size of the parameter space, and are therefore supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.

8.
A Genetic Selection Algorithm for OLAP Data Cubes
Multidimensional data analysis, as supported by OLAP (online analytical processing) systems, requires the computation of many aggregate functions over a large volume of historically collected data. To decrease query time and to provide various viewpoints for analysts, these data are usually organized as a multidimensional data model, called data cubes. Each cell in a data cube corresponds to a unique set of values for the different dimensions and contains the metric of interest. The data cube selection problem is, given a set of user queries and a storage space constraint, to select a set of materialized cubes from the data cubes so as to minimize the query cost and/or the maintenance cost. This problem is known to be NP-hard. In this study, we examined the application of genetic algorithms to the cube selection problem. We proposed a greedy-repaired genetic algorithm, called the genetic greedy method. According to our experiments, the solution obtained by the genetic greedy method is superior to that found using the traditional greedy method: within the same storage constraint, it greatly reduces both the query cost and the cube maintenance cost.
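A minimal Python sketch of a greedy-repaired genetic algorithm for cube selection under a storage budget follows. The candidate sizes, query-cost savings, and genetic operators are invented for the example; the paper's actual cost model and parameters are not reproduced.

```python
import random

SIZES  = [40, 25, 25, 10, 10, 5]      # storage cost per candidate cuboid
SAVING = [90, 60, 50, 30, 25, 10]     # query-cost saving if materialized
BUDGET = 60                           # storage space constraint

def repair(bits):
    """Greedy repair: drop the worst saving-per-size cuboid until feasible."""
    while sum(s for s, b in zip(SIZES, bits) if b) > BUDGET:
        chosen = [i for i, b in enumerate(bits) if b]
        worst = min(chosen, key=lambda i: SAVING[i] / SIZES[i])
        bits[worst] = 0
    return bits

def fitness(bits):
    return sum(v for v, b in zip(SAVING, bits) if b)

random.seed(0)
pop = [repair([random.randint(0, 1) for _ in SIZES]) for _ in range(20)]
for _ in range(50):                       # generations
    a, b = random.sample(pop, 2)
    cut = random.randrange(len(SIZES))    # one-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(len(SIZES))      # point mutation
    child[i] ^= 1
    repair(child)                         # keep every individual feasible
    pop.sort(key=fitness)
    if fitness(child) > fitness(pop[0]):
        pop[0] = child                    # replace the weakest
best = max(pop, key=fitness)
print(best, fitness(best))
```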

9.
Time‐series data is a common target for visual analytics, as it appears in a wide range of application domains. Typical tasks in analyzing time‐series data include identifying cyclic behavior, outliers, trends, and periods of time that share distinctive shape characteristics. Many methods for visualizing time‐series data exist, generally mapping the data values to positions or colors. While each can be used to perform a subset of the above tasks, none to date is a complete solution. In this paper we present a novel approach to time‐series data visualization, namely creating multivariate data records out of short subsequences of the data and then using multivariate visualization methods to display and explore the data in the resulting shape space. We borrow ideas from text analysis, where the use of N‐grams is a common approach to decomposing and processing unstructured text. By mapping each temporal N‐gram to a glyph, and then positioning the glyphs via PCA (essentially a projection in shape space), many different kinds of patterns in the sequence can be readily identified. Interactive selection via brushing, in conjunction with linking to other visualizations, provides a wide range of tools for exploring the data. We validate the usefulness of this approach with examples from several application domains and tasks, comparing our methods with traditional time‐series visualizations.
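A small numpy sketch of the shape-space construction: slide a window over the series to form temporal N-grams, then project the resulting multivariate records to 2D with PCA. The glyph rendering and brushing interactions are omitted, and the window length is an arbitrary choice.

```python
import numpy as np

def ngrams(series, n):
    """All overlapping length-n subsequences as rows of a matrix."""
    return np.array([series[i:i + n] for i in range(len(series) - n + 1)])

def pca_2d(X):
    """Project records onto their first two principal components."""
    Xc = X - X.mean(axis=0)                  # center the records
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:2].T                     # coordinates in shape space

t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
coords = pca_2d(ngrams(series, n=16))
print(coords.shape)   # (385, 2): one 2D point per temporal N-gram
```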

10.
Detailed animation of 3D articulated body models is in principle desirable but is also a highly resource‐intensive task. Resource limitations are particularly critical in 3D visualizations of multiple characters in real‐time game sequences. We investigated to what extent observers perceptually process the level of detail in naturalistic character animations. Only if such processing occurs would it be justified to spend valuable resources on richness of detail. An experiment was designed to test the effectiveness of 3D body animation. Observers had to judge the level of overall skill exhibited by four simulated soccer teams. The simulations were based on recorded RoboCup simulation league games. Thus objective skill levels were known from the teams' placement in the tournament. The animations' level of detail was varied in four increasing steps of modelling complexity. Results showed that observers failed to notice the differences in detail. Nonetheless, clear effects of character animation on perceived skill were found. We conclude that character animation co‐determines perceptual judgements even when observers are completely unaware of these manipulations. Copyright © 2000 John Wiley & Sons, Ltd.

11.
On-line analytical processing (OLAP) typically involves complex aggregate queries over large datasets. The data cube has been proposed as a structure that materializes the results of such queries in order to accelerate OLAP. A significant fraction of the related work has been on Relational-OLAP (ROLAP) techniques, which are based on relational technology. Existing ROLAP cubing solutions mainly focus on “flat” datasets, which do not include hierarchies in their dimensions. Nevertheless, as shown in this paper, the nature of hierarchies introduces several complications into the entire lifecycle of a data cube, including the operations of construction, storage, indexing, query processing, and incremental maintenance. This fact renders existing techniques essentially inapplicable in a significant number of real-world applications and mandates revisiting the entire cube lifecycle under the new perspective. In order to overcome this problem, the CURE algorithm has been recently proposed as an efficient mechanism to construct complete cubes over large datasets with arbitrary hierarchies and store them in a highly compressed format, compatible with the relational model. In this paper, we study the remaining phases in the cube lifecycle and introduce query-processing and incremental-maintenance algorithms for CURE cubes. These are significantly different from earlier approaches, which have been proposed for flat cubes constructed by other techniques and are inadequate for CURE due to its high compression rate and the presence of hierarchies. Our methods address issues such as cube indexing, query optimization, and lazy update policies. Especially regarding updates, such lazy approaches are applied for the first time on cubes. We demonstrate the effectiveness of CURE in all phases of the cube lifecycle through experiments on both real-world and synthetic datasets. Among the experimental results, we distinguish those that made CURE the first ROLAP technique to complete the construction and usage of the cube of the highest-density dataset in the APB-1 benchmark (12 GB). CURE was in fact quite efficient on this dataset, showing great promise for the technique overall.

12.
The hierarchical edge bundle (HEB) method generates useful visualizations of dense graphs, such as social networks, but requires a predefined clustering hierarchy and does not easily benefit from existing straight‐line visualization improvements. This paper proposes a new clustering approach that extracts the community structure of a network and organizes it into a hierarchy that is flatter than those of existing community‐based clustering approaches and maps better to HEB visualization. Our method not only discovers communities and generates clusters with better modularization quality, but also creates a balanced hierarchy that allows HEB visualization of unstructured social networks without predefined hierarchies. Results on several data sets demonstrate that this approach clarifies real‐world communication, collaboration and competition network structure and reveals information missed in previous visualizations. We further implemented our techniques in a social network visualization application on facebook.com and let users explore the visualization and community clustering of their own social networks.
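As a sketch of the preprocessing step, the snippet below derives a flat two-level hierarchy for HEB from modularity-based community detection, using networkx's greedy_modularity_communities as a stand-in for the paper's own clustering method; the hierarchy layout is our own illustrative choice.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                       # a classic small social network
communities = greedy_modularity_communities(G)   # modularity-based clusters

# Two-level hierarchy: root -> community -> leaf node, deliberately flat
# so that HEB bundling does not depend on a deep predefined tree.
hierarchy = {"root": [f"c{i}" for i in range(len(communities))]}
for i, members in enumerate(communities):
    hierarchy[f"c{i}"] = sorted(members)

print(hierarchy["root"])     # community nodes under the single root
print(hierarchy["c0"][:5])   # first few members of one community
```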

13.
This paper addresses a generic problem in remote sensing by aerial hyperspectral imaging systems: the very low spatial and spectral repeatability of image cubes. Most analysts are either unaware of this problem or simply ignore it. Hyperspectral image cubes acquired in consecutive flights over the same target should ideally be identical. In practice, two consecutive flights over the same target usually yield significant differences between the image cubes. These differences are due to variations in target characteristics, solar illumination, atmospheric conditions, and errors of the imaging system proper. Manufacturers of remote sensing imaging systems use sophisticated equipment to accurately calibrate their instruments, using optimal illumination and constant environment conditions. From a user's perspective, these calibration procedures are only of marginal interest because repeatability is ‘target dependent’. The analyst of hyperspectral imagery is primarily interested in the reliability of the end product, i.e. the repeatability of two image cubes consecutively acquired over the same target, after radiometric calibration, geo‐referencing and atmospheric corrections. Clearly, when the non‐repeatability variance is similar in magnitude to the variance of the spectral or spatial information of interest, the data cannot be used for classification or quantification prediction modelling. We present a simple approach for objective assessment of spatial and spectral repeatability by multiple image cube acquisitions, wherein the imaging system views a barium sulphate (BaSO4) painted panel illuminated by a halogen lamp, and by consecutive flights over a reference target. The data analysis is based on several indexes developed for quantifying the spectral and spatial repeatability of hyperspectral image cubes and for detecting outlier voxels. The spectral repeatability information can be used to average less repeatable spectral bands or to exclude them from the analysis. The spatial repeatability information may be used to identify less repeatable regions of the target. Outlier voxels should be excluded from the analysis because they are grossly erroneous data. A modus operandi for image cube acquisition is provided whereby repeatability may be improved. Spatial and spectral averaging algorithms and software were developed for increasing the repeatability of image cubes in post‐processing.
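One plausible per-band repeatability index in the spirit of the abstract — though not the paper's actual indexes — compares the variance of the difference between two consecutively acquired cubes to the variance of their mean signal:

```python
import numpy as np

def band_repeatability(cube_a, cube_b):
    """cubes: (rows, cols, bands). One score per spectral band in [0, 1];
    values near 0 mean the band's flight-to-flight differences dwarf
    its own signal variance (an assumed index, for illustration only)."""
    diff_var   = np.var(cube_a - cube_b, axis=(0, 1))
    signal_var = np.var((cube_a + cube_b) / 2, axis=(0, 1))
    return signal_var / (signal_var + diff_var)

rng = np.random.default_rng(2)
a = rng.normal(size=(32, 32, 5))                 # synthetic 5-band cube
b = a + rng.normal(scale=[0.1, 0.1, 0.1, 1.0, 1.0], size=(32, 32, 5))
scores = band_repeatability(a, b)
print(scores.round(2))   # low scores flag bands to average out or drop
```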

14.
OLAP cubes provide exploratory query capabilities combining joins and aggregations at multiple granularity levels. However, cubes cannot intuitively or directly show the relationship between measures aggregated at different grouping levels. One prominent example is the percentage, which is widely used in most analytical applications. Considering this limitation, we introduce the percentage cube, a generalized data cube that takes percentages as its basic measure. More precisely, a percentage cube shows, in every cuboid, the fractional relationship between each measure aggregated on several dimensions and its rolled-up measure aggregated by fewer dimensions. We propose the syntax and introduce query optimizations to materialize the percentage cube. We show that percentage cubes are significantly harder to evaluate than standard data cubes because, in addition to the exponential number of cuboids, there is an additional exponential number of grouping column pairs (grouping columns at the individual level and at the total level) on which percentages are computed. We propose alternative methods to prune the cube and identify interesting percentages, including a row count threshold, a percentage threshold, and selecting the top k percentages. We study percentage aggregations within the classification of distributive, algebraic, and holistic functions. Finally, we also consider the problem of incremental computation of the percentage cube. Experiments compare our query optimizations with existing SQL functions, evaluate the impact and speed of lattice pruning methods, and study the effectiveness of the incremental computation.
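To make the core measure concrete, the pandas sketch below computes one family of percentage-cube cells: each (store, product) total as a fraction of its store-level roll-up. The schema and column names are invented, and the authors' SQL syntax and optimizations are not reproduced.

```python
import pandas as pd

fact = pd.DataFrame({
    "store":   ["s1", "s1", "s1", "s2", "s2"],
    "product": ["p1", "p2", "p1", "p1", "p2"],
    "sales":   [10.0, 30.0, 20.0, 40.0, 10.0],
})

# Individual level: grouped by both dimensions.
individual = fact.groupby(["store", "product"])["sales"].sum()
# Total level: rolled up to fewer dimensions (store only).
total = fact.groupby("store")["sales"].sum()
# The percentage measure: broadcast the roll-up across the store level.
pct = individual.div(total, level="store").rename("pct_of_store")
print(pct.reset_index())   # e.g. (s1, p1) -> 0.5 of store s1's sales
```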

15.
Since the publication of the original Marching Cubes algorithm, numerous variations have been proposed for guaranteeing water-tight constructions of triangulated approximations of isosurfaces. Most approaches divide 3D space into cubes that each occupy the space between eight neighboring samples of a regular lattice. The portion of the isosurface inside a cube may be computed independently of what happens in the other cubes, provided that the constructions for each pair of neighboring cubes agree along their common face. The portion of the isosurface associated with a cube may consist of one or more connected components, which we call sheets. The topology and combinatorial complexity of the isosurface are influenced by three types of decisions made during its construction: (1) how to connect the four intersection points on each ambiguous face, (2) how to form interpolating sheets for cubes with more than one loop, and (3) how to triangulate each sheet. To determine topological properties, it is only relevant whether the samples are inside or outside the object, not their precise values. Previously reported techniques make these decisions based on local (per-cube) criteria, often using precomputed look-up tables or simple construction rules. Instead, we propose global strategies for optimizing several topological and combinatorial measures of the isosurfaces: triangle count, genus, and number of shells. We describe efficient implementations of these optimizations and the auxiliary data structures developed to support them.

16.
The purpose of multi‐run simulations is often to capture the variability of the output with respect to different initial settings. Comparative analysis of multi‐run spatio‐temporal simulation data requires us to investigate the differences in the dynamics of the simulations' changes over time. To capture these changes and differences, aggregated statistical information is often insufficient, and it is desirable to capture the local differences between spatial data fields at different times and between different runs. To calculate the pairwise similarity between data fields, we generalize the concept of isosurface similarity from individual surfaces to entire fields and propose efficient computation strategies. The described approach can be applied to a single scalar field for all simulation runs, or generalized to a similarity measure capturing all data fields of a multi‐field data set simultaneously. Given the field similarity, we use multi‐dimensional scaling to visualize the similarity in two‐dimensional or three‐dimensional projected views, as well as plotting one‐dimensional similarity projections over time. Each simulation run is depicted as a polyline within the similarity maps. The overall visual analysis concept can be applied using our proposed field similarity or any other existing measure of field similarity. We evaluate our measure against popular existing measures for different configurations and discuss their advantages and limitations. We apply them to generate similarity maps for real‐world data sets within the overall concept for comparative visualization of multi‐run spatio‐temporal data and discuss the results.
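A compact numpy sketch of the overall pipeline follows: pairwise distances between scalar fields (plain L2 over the grids here, as a stand-in for the paper's generalized isosurface similarity), then classical MDS so that each run's time steps become points — and hence a polyline — in a 2D similarity map.

```python
import numpy as np

def pairwise_l2(fields):
    """fields: array (n_fields, *grid). Returns an (n, n) distance matrix."""
    flat = fields.reshape(len(fields), -1)
    diff = flat[:, None, :] - flat[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def classical_mds(D, dims=2):
    """Embed a distance matrix via double centering + eigendecomposition."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dims]          # keep the largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(0)
fields = rng.normal(size=(6, 8, 8, 8))        # 6 time steps of an 8^3 grid
coords = classical_mds(pairwise_l2(fields))
print(coords.shape)                           # (6, 2) similarity-map points
```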

17.
A novel top-down compression technique for data cubes is introduced and experimentally assessed in this paper. This technique considers the previously unrecognized case in which multiple Hierarchical Range Queries (HRQ), a very useful class of OLAP queries, must be evaluated against the target data cube simultaneously. This scenario renders traditional data cube compression techniques ineffective since, contrary to the aim of our work, they take only one constraint into consideration (e.g., a given storage space bound). Our study introduces an innovative multiple-objective OLAP computational paradigm and a hierarchical multidimensional histogram whose main benefit is implementing an intermediate compression of the input data cube able to simultaneously accommodate even a large family of HRQ of different natures. A complementary contribution of our work is a wide experimental evaluation of the performance of our technique on both benchmark and real-life data cubes, in comparison with state-of-the-art histogram-based compression techniques.

18.
We introduce a novel flexible approach to spatiotemporal exploration of rectilinear scalar volumes. Our out‐of‐core representation, based on per‐frame levels of hierarchically tiled non‐redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low‐bitrate codec able to store into fixed‐size pages a variable‐rate approximation based on sparse coding with learned dictionaries is exploited to meet stringent bandwidth constraints during time‐critical operations, while a near‐lossless representation is employed to support high‐quality static frame rendering. A flexible high‐speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object‐space and image‐space operations for seamless support, on fat and thin clients, of different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high‐quality snapshots generated from near‐lossless data. The quality and performance of our approach are demonstrated on large data sets with thousands of multi‐billion‐voxel frames.

19.
We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand, and the visualization task supported on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under‐explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.

20.
Ma Yu, Wang Lisheng. Computer Engineering and Design (《计算机工程与设计》), 2007, 28(22): 5444-5446, 5467
A new algorithm is proposed for the selective extraction of 3D edge-surface models from volumetric images, allowing the user to interactively obtain the 3D edge surface of interest. The user selects, on a 2D slice of the tomographic image, the 2D region corresponding to the target 3D edge-surface model. The selected coordinates are mapped into a 3D region, and a Laplacian-based 3D edge detector identifies the partial edge cubes within it. After denoising, these cubes are set as seeds, and, following the principles of face-sharing cubes and 3D region growing, the algorithm traces out the user's 3D edge-surface model of interest with sub-voxel precision.
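A bare-bones Python sketch of the face-adjacency region growing described above: flood from a seed voxel through the 6-connected neighborhood of a boolean edge mask. The Laplacian edge detection, denoising, and sub-voxel precision steps are left out, and the mask is synthetic.

```python
import numpy as np
from collections import deque

def grow_region(edge_mask, seed):
    """6-connected flood fill inside a boolean 3D mask (face adjacency)."""
    visited = np.zeros_like(edge_mask, dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < edge_mask.shape[i] for i in range(3)) \
                    and edge_mask[n] and not visited[n]:
                visited[n] = True
                queue.append(n)
    return visited

mask = np.zeros((5, 5, 5), dtype=bool)
mask[2, 1:4, 2] = True                       # a small synthetic edge segment
print(grow_region(mask, (2, 2, 2)).sum())    # 3 connected edge voxels found
```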
