Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by parallel isosurface extraction and rendering, in conjunction with a new specialized piece of image compositing hardware called Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by combining parallel and out-of-core processing with parallel disks. It statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces.

2.
Metaballs are implicit surfaces widely used to model curved objects, represented by the isosurface of a density field defined by a set of points. Recently, the results of particle-based simulations have often been visualized using a large number of metaballs; however, such visualizations have high rendering costs. In this paper we propose a fast technique for rendering metaballs on the GPU. Instead of using polygonization, the isosurface is evaluated directly in a per-pixel manner. For such evaluation, all metaballs contributing to the isosurface need to be extracted along each viewing ray, within the limited memory of GPUs. We handle this by keeping a list of the metaballs contributing to the isosurface and updating it efficiently. Our method requires neither expensive precomputation nor the acceleration data structures often used in existing ray tracing techniques. With several optimizations, we can display a large number of moving metaballs quickly.
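As a rough illustration of the per-pixel evaluation described above (a CPU sketch, not the authors' GPU implementation), the code below marches along a viewing ray, sums the density contributions of the metaballs, and reports the first sample where the field reaches the isovalue. The falloff kernel, step size, and isovalue are illustrative assumptions.

```python
import numpy as np

def metaball_density(p, centers, radii):
    """Sum the contributions of all metaballs at point p using a simple
    compactly supported polynomial falloff kernel (an assumed kernel)."""
    d2 = np.sum((centers - p) ** 2, axis=1)
    t = np.clip(1.0 - d2 / radii ** 2, 0.0, None)
    return float(np.sum(t ** 2))

def first_isosurface_hit(origin, direction, centers, radii,
                         isovalue=0.5, t_max=10.0, step=0.01):
    """March along the viewing ray and return the first point where the
    density field reaches the isovalue (or None if the ray misses)."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    while t < t_max:
        p = origin + t * direction
        if metaball_density(p, centers, radii) >= isovalue:
            return p
        t += step
    return None

# Example: two overlapping metaballs blending into a single blob.
centers = np.array([[0.0, 0.0, 3.0], [0.6, 0.0, 3.0]])
radii = np.array([1.0, 1.0])
print(first_isosurface_hit(np.zeros(3), np.array([0.0, 0.0, 1.0]), centers, radii))
```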

3.
We propose a novel Persistent OcTree (POT) indexing structure for accelerating isosurface extraction and spatial filtering from volumetric data. This data structure efficiently handles a wide range of visualization problems such as the generation of view-dependent isosurfaces, ray tracing, and isocontour slicing for high-dimensional data. POT can be viewed as a hybrid between the interval tree and the Branch-On-Need Octree (BONO), in the sense that it achieves the asymptotic bound of the interval tree for identifying the active cells corresponding to an isosurface while being more efficient than BONO for handling spatial queries. For each isovalue we encode a compact octree containing only the corresponding active cells, in such a way that the combined structure has linear space. The hierarchical structure inherent in the active cells enables very fast filtering of the active cells based on spatial constraints. We demonstrate the effectiveness of our approach by performing view-dependent isosurfacing on a wide variety of volumetric data sets and 4D isocontour slicing on the time-varying Richtmyer-Meshkov instability dataset.

4.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on essentially a per-sample basis, as in kd-tree traversal, leading to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that obtains all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterising pass. The number of layers peeled by our approach can reach into the thousands while maintaining 32-bit floating-point accuracy, which was not possible previously. By ray tracing only the valid ray segments between each consecutive pair of depth layers, we can skip both the interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.

5.
Seungtaik Oh, Bon Ki Koo. Graphical Models, 2007, 69(3–4): 211–218.
A simple and efficient method is proposed to reduce the total number of triangles produced by an isosurface extraction method based on tetrahedral decomposition. We slightly perturb the input volumetric data so that useless small and thin triangles are removed. The perturbed volumetric data contain the exact isovalues from which a mesh is extracted. Since the proposed method is a pre-process of isosurface extraction, it does not need to modify the mesh structure, unlike other similar methods.
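A hedged sketch of the general idea of perturbing scalar values near the isovalue so that near-degenerate edge crossings (the source of tiny, sliver triangles) disappear; the snapping rule and the epsilon threshold below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def perturb_volume(values, isovalue, eps=1e-4):
    """Snap scalar values lying within eps of the isovalue onto it.

    Cell edges whose endpoint values only barely straddle the isovalue
    would otherwise generate tiny or sliver triangles; after snapping,
    the intersection falls exactly onto a grid vertex instead.
    (Assumed rule, not the paper's exact perturbation.)
    """
    out = values.astype(float, copy=True)
    out[np.abs(out - isovalue) < eps] = isovalue
    return out

# Example on a small synthetic scalar field.
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
vol_p = perturb_volume(vol, isovalue=0.5, eps=1e-3)
print(int(np.count_nonzero(vol != vol_p)), "voxel values snapped")
```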

6.
Mesh Optimization for Polygonized Isosurfaces
In this paper, we propose a method for improving isosurface polygonizations. Given an initial polygonization of an isosurface, we introduce a mesh evolution process initialized by that polygonization. The evolving mesh converges quickly to its limit mesh, which provides a high-quality approximation of the isosurface even if the isosurface has sharp features, boundaries, or complex topology. To analyze how closely the evolving mesh approaches its target isosurface, we introduce error estimators measuring the deviations of the mesh vertices from the isosurface and of the mesh normals from the isosurface normals. A new technique for mesh editing with isosurfaces is also proposed. In particular, it can be used for creating carving effects.

7.
Classifiability-based omnivariate decision trees
Top-down induction of decision trees is a simple and powerful method of pattern classification. In a decision tree, each node partitions the available patterns into two or more sets. New nodes are created to handle each of the resulting partitions, and the process continues. A node is considered terminal if it satisfies some stopping criterion (for example, purity, i.e., all patterns at the node are from a single class). Decision trees may be univariate, linear multivariate, or nonlinear multivariate depending on whether a single attribute, a linear function of all the attributes, or a nonlinear function of all the attributes is used for the partitioning at each node of the decision tree. Though nonlinear multivariate decision trees are the most powerful, they are more susceptible to the risk of overfitting. In this paper, we propose to perform model selection at each decision node to build omnivariate decision trees. The model selection is done using a novel classifiability measure that captures the possible sources of misclassification with relative ease and is able to accurately reflect the complexity of the subproblem at each node. The proposed approach is fast and does not suffer from as high a computational burden as that incurred by typical model selection algorithms. Empirical results over 26 data sets indicate that our approach is faster and achieves better classification accuracy compared with statistical model selection algorithms.

8.
Enlarging the Margins in Perceptron Decision Trees
Capacity control in perceptron decision trees is typically performed by controlling their size. We prove that other quantities can be just as relevant for reducing their flexibility and combating overfitting. In particular, we provide an upper bound on the generalization error which depends both on the size of the tree and on the margins of the decision nodes, so enlarging the margins in perceptron decision trees reduces this upper bound. Based on this analysis, we introduce three new algorithms that can induce large-margin perceptron decision trees. To assess the effect of the large-margin bias, OC1 (Journal of Artificial Intelligence Research, 1994, 2, 1–32) of Murthy, Kasif, and Salzberg, a well-known system for inducing perceptron decision trees, is used as the baseline algorithm. An extensive experimental study on real-world data showed that all three new algorithms perform better, or at least not significantly worse, than OC1 on almost every dataset, with only one exception; OC1 performed worse than the best margin-based method on every dataset.

9.
This paper describes the methods and process of 3D reconstruction from medical images and proposes a fast seed-voxel search algorithm based on span space. An isosurface-propagation algorithm is used to quickly extract triangular isosurfaces, and coplanar isosurface triangles produced by the midpoint approximation are merged. Experimental results show that the proposed algorithm speeds up reconstruction and facilitates real-time rendering of and interaction with large-scale 3D reconstructions of medical image data.

10.
We present an efficient point-based isosurface exploration system with high-quality rendering. Our system incorporates two point-based isosurface extraction and visualization methods: edge splatting and the edge kernel method. In a volume, two neighboring voxels define an edge. The intersection points between the active edges and the isosurface are used for exact isosurface representation. The point generation is incorporated into the GPU-based hardware-accelerated rendering, thus avoiding any overhead when the isovalue is changed during exploration. We call this method edge splatting. In order to generate high-quality isosurface rendering regardless of the volume resolution and the view, we introduce an edge kernel method. The edge kernel upsamples the isosurface by subdividing every active cell of the volume data. Enough sample points are generated to preserve the exact shape of the isosurface defined by the trilinear interpolation of the volume data. By employing these two methods, we can achieve interactive isosurface exploration with high-quality rendering.
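As an illustration of the point-generation step (a CPU/NumPy stand-in, not the GPU edge-splatting implementation), the sketch below places one sample point on every active grid edge by linear interpolation; the unit grid spacing and strict straddle test are assumptions.

```python
import numpy as np

def active_edge_points(volume, isovalue, spacing=1.0):
    """Place one isosurface sample on every active grid edge.

    An edge is active when its two endpoint values straddle the isovalue;
    the sample is positioned by linear interpolation of the two values.
    """
    points = []
    for axis in range(3):
        lo = [slice(None)] * 3
        hi = [slice(None)] * 3
        lo[axis] = slice(0, -1)
        hi[axis] = slice(1, None)
        a = volume[tuple(lo)]
        b = volume[tuple(hi)]
        active = (a - isovalue) * (b - isovalue) < 0      # endpoints straddle the isovalue
        idx = np.argwhere(active).astype(float)           # lower endpoint of each active edge
        t = (isovalue - a[active]) / (b[active] - a[active])  # interpolation weight along the edge
        idx[:, axis] += t
        points.append(idx * spacing)
    return np.concatenate(points, axis=0)

# Example: a sphere-like field sampled on a 32^3 grid.
g = np.linspace(-1, 1, 32)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
field = x**2 + y**2 + z**2
print(active_edge_points(field, isovalue=0.5).shape)
```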

11.
In this paper, we introduce the concept of isosurface similarity maps for the visualization of volume data. Isosurface similarity maps present structural information about a volume data set by depicting similarities between individual isosurfaces, quantified by a robust information-theoretic measure. Unlike conventional histograms, they are not based on the frequency of isovalues and/or derivatives and therefore provide complementary information. We demonstrate that this new representation can be used to guide transfer function design and visualization parameter specification. Furthermore, we use isosurface similarity to develop an automatic, parameter-free method for identifying representative isovalues. Using real-world data sets, we show that isosurface similarity maps can be a useful addition to conventional classification techniques.
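The abstract does not spell out the exact measure, so the sketch below is only one plausible instantiation: it compares isosurfaces through the mutual information of their approximate distance fields, computed from a joint histogram. The distance-field approximation, bin count, and synthetic volume are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_distance_field(volume, isovalue):
    """Unsigned distance (in voxels) to the approximate isosurface at isovalue."""
    inside = volume >= isovalue
    # Distance to the region boundary from either side, combined into one field.
    return np.minimum(distance_transform_edt(inside),
                      distance_transform_edt(~inside))

def mutual_information(a, b, bins=64):
    """Mutual information between two scalar fields via a joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

def similarity_map(volume, isovalues):
    """Pairwise similarity of isosurfaces, measured on their distance fields."""
    fields = [boundary_distance_field(volume, v) for v in isovalues]
    n = len(isovalues)
    m = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            m[i, j] = mutual_information(fields[i], fields[j])
    return m

# Example: similarity map of 8 isovalues of a synthetic volume.
g = np.linspace(-1, 1, 32)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
vol = np.exp(-(x**2 + 2 * y**2 + 3 * z**2))
print(similarity_map(vol, np.linspace(0.2, 0.8, 8)).round(2))
```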

12.
We present a novel approach to out-of-core time-varying isosurface visualization. We aim to interactively visualize time-varying datasets that are too large to fit into main memory, using a technique dramatically different from existing algorithms. Inspired by video encoding techniques, we examine the data differences between time steps to extract isosurface information. We exploit span-space extraction techniques to retrieve the operations necessary to update isosurface geometry from neighboring time steps. Because only the changes between time steps need to be retrieved from disk, I/O bandwidth requirements are minimized. We apply temporal compression to further reduce disk access and employ a point-based previewing technique that is refined in idle interaction cycles. Our experiments on computational simulation data indicate that this method is an extremely viable solution to large time-varying isosurface visualization. Our work advances the state of the art by enabling all isosurfaces to be represented by a compact set of operations.

13.
Speeding up isosurface extraction using interval trees
The interval tree is an optimally efficient search structure proposed by Edelsbrunner (1980) to retrieve intervals on the real line that contain a given query value. We propose the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset. The resulting search method can be applied to both structured and unstructured volume datasets, and it can be applied incrementally to exploit coherence between isosurfaces. We also address storage requirements and operations other than cell location, whose impact is relevant to the whole isosurface extraction task. In the case of unstructured grids, the overhead due to the search structure is compatible with the storage cost of the dataset, and local coherence in the computation of isosurface patches is exploited through a hash table. In the case of a structured dataset, a new conceptual organization, called the chess-board approach, is adopted, which exploits the regular structure of the dataset to reduce memory usage and to exploit local coherence. In both cases, efficiency in the computation of surface normals on the isosurface is obtained by precomputing the gradients at the vertices of the mesh. Experiments on different kinds of input show that the practical performance of the method reflects its theoretical optimality.
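A minimal sketch of the underlying stabbing-query idea: each cell is stored as the interval [min, max] of its vertex scalar values, and a query with the isovalue reports exactly the active cells. This is a plain in-memory interval tree for illustration only; it omits the incremental traversal, the chess-board organization, and the normal precomputation described above.

```python
class IntervalTree:
    """Static interval tree for stabbing queries: given a query isovalue,
    report all cells whose [lo, hi] scalar range contains it."""

    def __init__(self, intervals):
        # intervals: list of (lo, hi, cell_id) tuples
        endpoints = sorted(x for lo, hi, _ in intervals for x in (lo, hi))
        self.center = endpoints[len(endpoints) // 2]
        here, left, right = [], [], []
        for iv in intervals:
            lo, hi, _ = iv
            if hi < self.center:
                left.append(iv)
            elif lo > self.center:
                right.append(iv)
            else:
                here.append(iv)
        self.by_lo = sorted(here, key=lambda iv: iv[0])                 # ascending lower bounds
        self.by_hi = sorted(here, key=lambda iv: iv[1], reverse=True)   # descending upper bounds
        self.left = IntervalTree(left) if left else None
        self.right = IntervalTree(right) if right else None

    def stab(self, q, out=None):
        """Collect the ids of all intervals containing q (the active cells)."""
        if out is None:
            out = []
        if q < self.center:
            for lo, _, cid in self.by_lo:      # intervals here have hi >= center > q
                if lo > q:
                    break
                out.append(cid)
            if self.left:
                self.left.stab(q, out)
        else:
            for _, hi, cid in self.by_hi:      # intervals here have lo <= center <= q
                if hi < q:
                    break
                out.append(cid)
            if self.right:
                self.right.stab(q, out)
        return out

# Example: each "cell" is the interval of scalar values spanned by its vertices.
cells = [(0.1, 0.4, "c0"), (0.3, 0.9, "c1"), (0.5, 0.7, "c2"), (0.0, 0.2, "c3")]
tree = IntervalTree(cells)
print(sorted(tree.stab(0.35)))  # active cells for isovalue 0.35 -> ['c0', 'c1']
```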

14.
A shortcoming of univariate decision tree learners is that they do not learn intermediate concepts and select only one of the input features in the branching decision at each intermediate tree node. It has been empirically demonstrated that cascading other classification methods, which do learn intermediate concepts, with decision tree learners can alleviate this representational bias of decision trees and potentially improve classification performance. However, a more complex model that fits the training data better may not necessarily perform better on unseen data, a problem commonly referred to as overfitting. To find the most appropriate degree of such cascade generalization, a decision forest (i.e., a set of decision trees with other classification models cascaded to different degrees) needs to be generated, from which the best decision tree can then be identified. In this paper, the authors propose an efficient algorithm for generating such decision forests. The algorithm uses an extended decision tree data structure and constructs any node that is common to multiple decision trees only once. The authors empirically evaluate the algorithm using 32 classification data sets from the University of California, Irvine (UCI) machine learning repository and report results demonstrating its efficiency.

15.
In this paper, we propose a novel technique for constructing multiple levels of a tetrahedral volume dataset while preserving the topologies of all isosurfaces embedded in the data. Our simplification technique has two major phases. In the segmentation phase, we segment the volume data into topological-equivalence regions, that is, the sub-volumes within each of which all isosurfaces have the same topology. In the simplification phase, we simplify each topological-equivalence region independently, one by one, by collapsing edges from the smallest to the largest errors (within the user-specified error tolerance, for a given error metric), and ensure that we do not collapse edges that may cause an isosurface-topology change. We also avoid creating tetrahedral cells of negative volume (i.e., we avoid the fold-over problem). In this way, we guarantee to preserve all isosurface topologies throughout the simplification process, with a controlled geometric error bound. Our method also involves several additional novel ideas, including using Morse theory and the implicit fully augmented contour tree, identifying types of edges that are not allowed to be collapsed, and developing efficient techniques to avoid many unnecessary or expensive checks, all in an integrated manner. The experiments show that all the resulting isosurfaces preserve their topologies and have good accuracy in their geometric shapes. Moreover, we obtain good data-reduction rates, with competitively fast running times.

16.
In this paper we discuss the issues related to the development of efficient parallel implementations of the Marching Cubes algorithm, one of the most widely used methods for isosurface extraction, which is a fundamental operation for 3D data analysis and visualization. We present three possible parallelization strategies and outline the pros and cons of each of them, considering isosurface extraction both as a stand-alone operation and as part of a dynamic workflow. Our analysis shows that none of these implementations represents the most efficient solution in arbitrary situations. This is a major issue, because in many cases the quality of the service provided by a workflow depends on the possibility of selecting dynamically the operations to perform and, consequently, the most efficient basic building block for each stage. In this paper we present a set of guidelines that make it possible to achieve the highest performance for isosurface extraction in the most common situations, considering the characteristics of the data to process and of the workflow. These guidelines represent a suitable example of how to support the efficient configuration of workflows for 3D data processing in a dynamic and complex computing environment.
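One of the strategies such an analysis typically compares is a static data-parallel decomposition of the volume into slabs, each processed by an independent worker. The sketch below illustrates only that decomposition (counting active cells per slab instead of emitting triangles); the one-layer slab overlap, thread-based pool, and task count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def cell_ranges(volume):
    """Per-cell (min, max) over the 8 corner values of every grid cell."""
    n0, n1, n2 = volume.shape
    corners = [volume[i:n0 - 1 + i, j:n1 - 1 + j, k:n2 - 1 + k]
               for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    stack = np.stack(corners)
    return stack.min(axis=0), stack.max(axis=0)

def process_slab(volume, z0, z1, isovalue):
    """Work done by one task: find active cells in slab [z0, z1), with one
    extra layer of overlap so cells on the seam are not lost."""
    slab = volume[z0:min(z1 + 1, volume.shape[0])]
    cmin, cmax = cell_ranges(slab)
    active = (cmin <= isovalue) & (cmax >= isovalue)
    return int(active.sum())  # a real worker would emit triangles here

def parallel_active_cells(volume, isovalue, n_tasks=4):
    """Static slab decomposition of the volume across a pool of workers."""
    bounds = np.linspace(0, volume.shape[0] - 1, n_tasks + 1).astype(int)
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        futures = [pool.submit(process_slab, volume, z0, z1, isovalue)
                   for z0, z1 in zip(bounds[:-1], bounds[1:])]
        return sum(f.result() for f in futures)

# Example on a synthetic radial field.
g = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
vol = x**2 + y**2 + z**2
print(parallel_active_cells(vol, isovalue=0.5))
```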

17.
Motivated by the desire to construct decision trees that are compact in terms of the expected path length traversed to reach a decision, we propose a new node-splitting measure for decision tree construction. We show that the proposed measure is convex and cumulative, and we exploit these properties in the construction of decision trees for classification. Results obtained from several datasets from the UCI repository show that the proposed measure results in decision trees that are more compact, with classification accuracy comparable to that obtained using popular node-splitting measures such as Gain Ratio and the Gini Index.
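The abstract does not define the proposed measure, so the sketch below uses the Gini index purely as a stand-in to show the node-splitting loop that any such impurity measure plugs into; the toy data and exhaustive threshold search are assumptions.

```python
from collections import Counter

def gini(labels):
    """Gini impurity; used here only as a stand-in for the paper's measure."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Pick the (feature, threshold) whose weighted child impurity is lowest."""
    n, d = len(rows), len(rows[0])
    best = None
    for j in range(d):
        for t in sorted({r[j] for r in rows}):
            left = [labels[i] for i in range(n) if rows[i][j] <= t]
            right = [labels[i] for i in range(n) if rows[i][j] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if best is None or score < best[0]:
                best = (score, j, t)
    return best  # (weighted impurity, feature index, threshold)

# Toy example: both features separate the two classes cleanly.
rows = [(2.0, 1.0), (2.5, 1.5), (3.0, 3.0), (3.5, 3.5)]
labels = ["a", "a", "b", "b"]
print(best_split(rows, labels))
```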

18.
Access control policies may contain anomalies such as incompleteness and inconsistency, which can result in security vulnerabilities. Detecting such anomalies automatically in large sets of complex policies is a difficult and challenging problem. In this paper, we propose a novel method for detecting inconsistency and incompleteness in access control policies with the help of data classification tools well known in data mining. Our proposed method consists of three phases. First, we parse the policy data set; this includes ordering of attributes and normalization of Boolean expressions. Second, we generate decision trees with the help of our proposed algorithm, which is a modification of the well-known C4.5 algorithm. Third, we execute our proposed anomaly detection algorithm on the resulting decision trees. The results of the anomaly detection algorithm are presented to the policy administrator, who can then take remediation measures. In contrast to other known policy validation methods, our method provides means for handling incompleteness, continuous values, and complex Boolean expressions. In order to demonstrate the efficiency of our method in discovering inconsistencies, incompleteness, and redundancies in access control policies, we also provide a proof-of-concept implementation.
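To make the two anomaly types concrete, the sketch below checks a toy attribute-based policy by brute-force enumeration of the request space rather than via the decision-tree construction described above: a request matched by no rule signals incompleteness, and a request matched by rules with conflicting effects signals inconsistency. The rules, attribute domains, and effect names are invented for illustration.

```python
from itertools import product

# Toy policy: each rule maps fixed attribute values to an effect (illustrative only).
DOMAINS = {"role": ["admin", "user"], "resource": ["db", "log"], "action": ["read", "write"]}
RULES = [
    ({"role": "admin"}, "permit"),
    ({"role": "user", "action": "read"}, "permit"),
    ({"role": "user", "resource": "db", "action": "read"}, "deny"),  # conflicts with the rule above
]

def matches(rule_cond, request):
    """A rule matches when every attribute it constrains has the requested value."""
    return all(request[k] == v for k, v in rule_cond.items())

def find_anomalies(domains, rules):
    """Brute-force check over the whole request space:
    incompleteness = a request matched by no rule,
    inconsistency  = a request matched by rules with different effects."""
    incomplete, inconsistent = [], []
    keys = list(domains)
    for values in product(*(domains[k] for k in keys)):
        request = dict(zip(keys, values))
        effects = {eff for cond, eff in rules if matches(cond, request)}
        if not effects:
            incomplete.append(request)
        elif len(effects) > 1:
            inconsistent.append(request)
    return incomplete, inconsistent

incomplete, inconsistent = find_anomalies(DOMAINS, RULES)
print("uncovered requests:", incomplete)
print("conflicting requests:", inconsistent)
```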

19.
In this paper, we present a novel isosurface visualization technique that guarantees the accurate visualization of isosurfaces with complex attribute data defined on (un)structured (curvi)linear hexahedral grids. Isosurfaces of high-order hexahedral-based finite element solutions, on both uniform grids (including MRI and CT scans) and more complex geometry representing a domain of interest, can be rendered using our algorithm. Additionally, our technique can be used to directly visualize solutions and attributes in isogeometric analysis, an area based on trivariate high-order NURBS (Non-Uniform Rational B-Splines) geometry and attribute representations for the analysis. Furthermore, our technique can be used to visualize isosurfaces of algebraic functions. Our approach combines subdivision and numerical root finding to form a robust and efficient isosurface visualization algorithm that does not miss surface features, while finding all intersections between a view frustum and the desired isosurfaces. This allows the use of view-independent transparency in the rendering process. We demonstrate our technique through a straightforward CPU implementation on both complex structured and complex unstructured geometries with high-order simulation solutions, on isosurfaces of medical data sets, and on isosurfaces of algebraic functions.
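A toy illustration of combining subdivision with numerical root finding along a single viewing ray: the interval of ray parameters is subdivided uniformly, and bisection is run in every sub-interval whose endpoint values straddle the isovalue. This naive sign test can miss tangential or paired intersections and therefore lacks the robustness guarantees the paper describes; the subdivision depth, tolerance, and example sphere function are assumptions.

```python
def ray_isosurface_hits(f, origin, direction, isovalue,
                        t0=0.0, t1=20.0, levels=12, tol=1e-9):
    """All intersections of the ray p(t) = origin + t*direction with the
    isosurface f(p) = isovalue, found by uniform subdivision of [t0, t1]
    followed by bisection in every sub-interval with a sign change."""

    def g(t):
        x, y, z = (origin[i] + t * direction[i] for i in range(3))
        return f(x, y, z) - isovalue

    def bisect(a, b):
        fa = g(a)
        while b - a > tol:
            m = 0.5 * (a + b)
            fm = g(m)
            if fa * fm <= 0.0:
                b = m
            else:
                a, fa = m, fm
        return 0.5 * (a + b)

    n = 2 ** levels                      # uniform subdivision of the ray span
    ts = [t0 + (t1 - t0) * i / n for i in range(n + 1)]
    vals = [g(t) for t in ts]
    hits = []
    for a, b, fa, fb in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
        if fa == 0.0:
            hits.append(a)
        elif fa * fb < 0.0:              # sign change: one root assumed in this sub-interval
            hits.append(bisect(a, b))
    return hits

# Example: a unit sphere centred at (0, 0, 3), hit near t = 2 and t = 4.
sphere = lambda x, y, z: x * x + y * y + (z - 3.0) ** 2
print(ray_isosurface_hits(sphere, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), isovalue=1.0))
```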

20.
We present a novel approach for visualizing the positional and geometrical variability of isosurfaces in uncertain 3D scalar fields. Our approach extends recent work by Pöthkow and Hege [PH10] in that it accounts for correlations in the data to determine more reliable isosurface crossing probabilities. We introduce an incremental update scheme that allows the probability computation to be integrated efficiently into front-to-back volume ray-casting. Our method accounts for homogeneous and anisotropic correlations, and it determines, for each sampling interval along a ray, the probability of crossing an isosurface for the first time. To visualize the positional and geometrical uncertainty even under viewing directions parallel to the surface normal, we propose a new color mapping scheme based on the approximate spatial deviation of possible surface points from the mean surface. The additional use of saturation makes it possible to distinguish between areas of high and low statistical dependence. Experimental results confirm the effectiveness of our approach for visualizing the uncertainty in the position and shape of convex and concave isosurface structures.
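To give a concrete sense of why correlation matters for crossing probabilities, the sketch below evaluates, under a Gaussian model of the values at the two endpoints of a sampling interval, the probability that the field straddles the isovalue, P[(X1 − c)(X2 − c) < 0], with and without correlation. The closed-form combination of CDFs and the numeric example are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def crossing_probability(mu1, mu2, sigma1, sigma2, rho, isovalue):
    """Probability that a Gaussian-modelled scalar field straddles the isovalue c
    between two sample points with means mu1, mu2, standard deviations sigma1,
    sigma2, and correlation rho: P[(X1 - c)(X2 - c) < 0]."""
    cov = np.array([[sigma1**2, rho * sigma1 * sigma2],
                    [rho * sigma1 * sigma2, sigma2**2]])
    p1 = norm.cdf(isovalue, loc=mu1, scale=sigma1)   # P(X1 < c)
    p2 = norm.cdf(isovalue, loc=mu2, scale=sigma2)   # P(X2 < c)
    p_both = multivariate_normal.cdf([isovalue, isovalue],
                                     mean=[mu1, mu2], cov=cov)  # P(X1 < c, X2 < c)
    return p1 + p2 - 2.0 * p_both

# In this example, ignoring the positive correlation (rho = 0) overestimates
# the probability of crossing the isosurface within the interval.
print(crossing_probability(0.4, 0.6, 0.2, 0.2, rho=0.9, isovalue=0.5))
print(crossing_probability(0.4, 0.6, 0.2, 0.2, rho=0.0, isovalue=0.5))
```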
