Similar Literature
20 similar documents found.
1.
Image vectorization is an important yet challenging problem, especially when the input image has rich content. In this paper, we develop a novel method for automatically vectorizing natural images with feature-aligned quad-dominant meshes. Inspired by quadrangulation methods in 3D geometry processing, we propose a new directional field optimization technique that encodes the color gradients, sidestepping the explicit computation of salient image features. We further compute the anisotropic scales of the directional field by accommodating the distance among image features. Our method is fully automatic and efficient, taking only a few seconds for a 400×400 image on a normal laptop. We demonstrate the effectiveness of the proposed method on various image editing applications.
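As a rough illustration of the color-gradient encoding, the sketch below derives a feature-aligned direction field from an RGB image using a smoothed structure tensor. This is a common, simplified stand-in for the paper's directional field optimization, and all parameters (e.g., the smoothing sigma) are illustrative assumptions.

```python
# Minimal sketch: derive a gradient-aligned direction field from an RGB image
# via a smoothed structure tensor. The paper's directional-field optimization
# is more elaborate; this only illustrates the "encode color gradients" idea.
import numpy as np
from scipy.ndimage import gaussian_filter

def direction_field(img, sigma=2.0):
    """img: (H, W, 3) float array. Returns (H, W) angles of the direction
    orthogonal to the dominant color gradient (i.e., along image features)."""
    # Sum squared-gradient structure tensors over the three color channels.
    Jxx = np.zeros(img.shape[:2]); Jxy = np.zeros_like(Jxx); Jyy = np.zeros_like(Jxx)
    for c in range(3):
        gy, gx = np.gradient(img[..., c])
        Jxx += gx * gx; Jxy += gx * gy; Jyy += gy * gy
    # Spatial smoothing makes the field coherent between strong edges.
    Jxx, Jxy, Jyy = (gaussian_filter(J, sigma) for J in (Jxx, Jxy, Jyy))
    # Orientation of the dominant eigenvector (gradient direction);
    # add pi/2 to align with features instead of across them.
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    return theta + np.pi / 2

theta = direction_field(np.random.rand(64, 64, 3))   # hypothetical input image
```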

2.
This survey provides an overview of perceptually motivated techniques for the visualization of medical image data, including physics-based lighting techniques as well as illustrative rendering techniques that incorporate spatial depth and shape cues. Additionally, we discuss evaluations that were conducted to study the perceptual effects of these visualization techniques compared to conventional techniques. These evaluations assessed depth and shape perception with depth judgment, orientation matching, and related tasks. This overview of existing techniques and their evaluation serves as a basis for defining the evaluation process of medical visualizations and for discussing a research agenda.

3.
Inertial particles are finite-sized objects traveling with a velocity that differs from the underlying carrying flow, i.e., they are mass-dependent and subject to inertia. Their backward integration is infeasible in practice, since a slight change in the initial velocity causes extreme changes in the recovered position. Thus, if an inertial particle is observed, it is difficult to recover where it came from. This is known as the source inversion problem, which has many practical applications in recovering the source of airborne or waterborne pollution. Inertial trajectories live in a higher-dimensional spatio-velocity space. In this paper, we show that this space is only sparsely populated. Assuming that inertial particles are released with a given initial velocity (e.g., from rest), particles may reach a certain location only with a limited set of possible velocities. In fact, with increasing integration duration and depending on the particle response time, inertial particles converge to a terminal velocity. We show that the set of initial positions that lead to the same location forms a curve. We extract these curves by devising a derived vector field in which they appear as tangent curves. Most importantly, the derived vector field only involves forward-integrated flow map gradients, which are much more stable to compute than backward trajectories. After extraction, we interactively visualize the curves in the domain and display the reached velocities using glyphs. In addition, we encode the rate of change of the terminal velocity along the curves, which indicates how quickly particles converge to the terminal velocity. With this, we present the first solution to the source inversion problem that considers actual inertial trajectories. We apply the method to steady and unsteady flows in both 2D and 3D domains.
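For intuition about the underlying particle model, here is a minimal sketch that forward-integrates an inertial particle whose velocity relaxes toward the carrying flow with response time r. The flow field `u` and all parameters are hypothetical, and the paper's method works with forward flow-map gradients rather than this direct integration.

```python
# Minimal sketch of forward integration of an inertial particle whose velocity
# relaxes toward the carrying flow with response time r (a simplified
# mass-dependent model).
import numpy as np

def u(x, t):
    # Hypothetical 2D carrying flow: a steady counter-clockwise vortex.
    return np.array([-x[1], x[0]])

def integrate_inertial(x0, v0, r, t0=0.0, t1=10.0, dt=1e-3):
    """Euler integration of dx/dt = v, dv/dt = (u(x,t) - v) / r."""
    x, v, t = np.asarray(x0, float), np.asarray(v0, float), t0
    traj = [x.copy()]
    while t < t1:
        x = x + dt * v
        v = v + dt * (u(x, t) - v) / r          # drag toward the flow velocity
        t += dt
        traj.append(x.copy())
    return np.array(traj)

# Released from rest: the particle accelerates toward the local flow and,
# for small r, quickly converges to a terminal velocity close to u.
path = integrate_inertial(x0=[1.0, 0.0], v0=[0.0, 0.0], r=0.2)
```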

4.
A bipartite graph is a powerful abstraction for modeling relationships between two collections. Visualizations of bipartite graphs allow users to understand the mutual relationships between the elements in the two collections, e.g., by identifying clusters of similarly connected elements. However, commonly used visual representations do not scale to the analysis of large bipartite graphs containing tens of millions of vertices, often resorting to an a-priori clustering of the sets. To address this issue, we present the Who's-Active-On-What-Visualization (WAOW-Vis), which allows for multiscale exploration of a bipartite social network without imposing an a-priori clustering. To this end, we propose to treat a bipartite graph as a high-dimensional space, and we create WAOW-Vis by adapting the multiscale dimensionality-reduction technique HSNE. Applying HSNE to bipartite graphs requires several modifications, which form the contributions of this work. Given the nature of the problem, a set-based similarity is proposed. For efficient and scalable computation, we represent sets as compressed bitmaps and present a novel space-partitioning tree, the Sets Intersection Tree, to efficiently compute similarities. Finally, we validate WAOW-Vis on several datasets connecting Twitter users and streams in different domains: news, computer science, and politics. We show that WAOW-Vis is particularly effective in identifying hierarchies of communities among social-media users.
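The set-based similarity can be sketched very compactly: each user is the set of streams they follow, stored as a bitmask, and similarity is the Jaccard index of two bitmasks. The paper uses compressed bitmaps and a Sets Intersection Tree for scalability; this sketch shows only the similarity itself, with hypothetical stream indices.

```python
# Minimal sketch of the set-based similarity idea: each Twitter user is the
# set of streams they follow, stored as a bitmask; similarity is Jaccard.
def to_bitmask(stream_ids):
    mask = 0
    for s in stream_ids:
        mask |= 1 << s            # set the bit for stream s
    return mask

def jaccard(a, b):
    inter = bin(a & b).count("1")
    union = bin(a | b).count("1")
    return inter / union if union else 0.0

u1 = to_bitmask({0, 2, 5, 9})     # hypothetical users and stream indices
u2 = to_bitmask({2, 5, 7})
print(jaccard(u1, u2))            # 0.4
```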

5.
Cartograms visualize quantitative data about a set of regions such as countries or states. There are several different types of cartograms and, for some, algorithms to automatically construct them exist. We focus on mosaic cartograms: cartograms that use multiples of simple tiles, usually squares or hexagons, to represent regions. Mosaic cartograms are well suited to communicating data that consist of, or can be cast into, small integer units (for example, electoral college votes). In addition, they allow users to accurately compare regions and can often maintain a (schematized) version of the input regions' shapes. We propose the first fully automated method to construct mosaic cartograms. To do so, we first introduce mosaic drawings of triangulated planar graphs. We then show how to modify mosaic drawings into mosaic cartograms with low cartographic error while maintaining correct adjacencies between regions. We validate our approach experimentally and compare it to other cartogram methods.
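The "small integer units" idea reduces to a rounding step. The sketch below casts hypothetical region values into tile counts for an assumed unit size and measures the relative cartographic error introduced by the rounding.

```python
# Minimal sketch: cast region values into integer tile counts and measure the
# relative cartographic error this rounding introduces (unit size is a choice).
values = {"A": 12.7, "B": 3.2, "C": 55.0}   # hypothetical region data
unit = 2.5                                   # value represented by one tile

tiles = {r: max(1, round(v / unit)) for r, v in values.items()}
error = {r: abs(tiles[r] * unit - v) / v for r, v in values.items()}
print(tiles)   # {'A': 5, 'B': 1, 'C': 22}
print(error)
```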

6.
Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface whose style is consistent with a given small set of glyphs. Motivated by the fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part-assembling approach: we first decompose the given glyphs into semantic parts and then assemble them according to learned sets of transfer rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We use a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, with favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.
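The retrieval-based prediction can be sketched as nearest-neighbor search over style vectors. The vectors and typeface names below are hypothetical, and cosine similarity is an assumed choice of distance.

```python
# Minimal sketch of the retrieval idea: fonts are represented as vectors of
# pairwise part similarities, and the style of a partial typeface is predicted
# from its nearest neighbor in that feature space. (Vectors are hypothetical.)
import numpy as np

library = {                      # style vectors of known typefaces
    "serif_a": np.array([0.9, 0.1, 0.8, 0.3]),
    "sans_b":  np.array([0.2, 0.9, 0.1, 0.7]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.15, 0.7, 0.35])    # style of the given glyphs
best = max(library, key=lambda name: cosine(library[name], query))
print(best)                                   # -> "serif_a"
```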

7.
In volume visualization, transfer functions are used to classify the volumetric data and assign optical properties to the voxels. In general, transfer functions are generated in a transfer function space, which is the feature space constructed by data values and properties derived from the data. If volumetric objects have the same or overlapping data values, it is difficult to separate them in the transfer function space. In this paper, we present a rule-enhanced transfer function design method that allows important structures of the volume to be more effectively separated and highlighted. We define a set of rules based on the local frequency distribution of volume attributes. A rule-selection method based on a genetic algorithm is proposed to learn the set of rules that can distinguish the user-specified target tissue from other tissues. In the rendering stage, voxels satisfying these rules are rendered with higher opacities in order to highlight the target tissue. The proposed method was tested on various volumetric datasets to enhance the visualization of important structures that are difficult to visualize with traditional transfer function design methods. The results demonstrate the effectiveness of the proposed method.
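As a toy illustration of genetic rule selection, the sketch below evolves bitmasks over a small candidate rule set, scoring each subset by how well voxels satisfying all selected rules coincide with a target label. The rules, features, and fitness are simplified stand-ins for the paper's local-frequency-distribution setup.

```python
# Minimal sketch of genetic rule selection: individuals are bitmasks over a
# candidate rule set; fitness measures how well the selected rules separate
# target voxels from the rest on tiny synthetic data.
import random
random.seed(0)

rules = [lambda v: v[0] > 0.5, lambda v: v[1] < 0.3, lambda v: v[0] + v[1] > 0.6]
target = [((0.8, 0.1), 1), ((0.7, 0.2), 1), ((0.2, 0.8), 0), ((0.4, 0.6), 0)]

def fitness(mask):
    sel = [r for i, r in enumerate(rules) if mask >> i & 1]
    if not sel:
        return 0.0
    hits = sum(all(r(v) for r in sel) == bool(lbl) for v, lbl in target)
    return hits / len(target)

pop = [random.randrange(1, 1 << len(rules)) for _ in range(8)]
for _ in range(20):                       # evolve: keep the best, mutate them
    pop.sort(key=fitness, reverse=True)
    pop = pop[:4] + [p ^ (1 << random.randrange(len(rules))) for p in pop[:4]]
best = max(pop, key=fitness)
print(bin(best), fitness(best))
```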

8.
Restricted Voronoi diagrams are a fundamental geometric structure used in many applications such as surface reconstruction from point sets or optimal transport. Given a set of sites V = {v_k}_{k=1}^n ⊂ ℝ^d and a mesh X with vertices in ℝ^d connected by triangles, the restricted Voronoi diagram partitions X by computing, for each site, the portion of X for which that site is the nearest. The restricted Voronoi diagram is the intersection between the regular Voronoi diagram and the mesh. Depending on the site distribution or the ambient space dimension, computing the regular Voronoi diagram may not be feasible using classical algorithms. In this paper, we extend Lévy and Bonneel's approach [LB12] based on nearest-neighbor queries. We show that their method is limited when the sites are not located on X. We propose a new algorithm for computing restricted Voronoi diagrams that reduces the number of sites considered for each triangle of the mesh and scales smoothly when the sites are far from the surface.
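The nearest-neighbor ingredient can be sketched with a k-d tree: label each mesh triangle by the site nearest to its centroid. A full restricted Voronoi computation would also clip each triangle against the bisectors of nearby sites; that step is omitted here, and all inputs are synthetic.

```python
# Minimal sketch of the nearest-neighbor ingredient: label mesh triangles by
# the nearest site to their centroid using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

sites = np.random.rand(100, 3)            # hypothetical sites in R^3
verts = np.random.rand(50, 3)             # hypothetical mesh vertices
tris = np.random.randint(0, 50, (80, 3))  # hypothetical triangle indices

tree = cKDTree(sites)
centroids = verts[tris].mean(axis=1)      # (80, 3) triangle centroids
_, nearest_site = tree.query(centroids)   # index of nearest site per triangle
```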

9.
We introduce a framework for the generation of polygonal gridshell architectural structures whose topology is designed to excel in static performance. We start from the analysis of stress on the input surface and use the resulting tensor field to induce an anisotropic non-Euclidean metric over it. This metric is derived by studying the relation between the stress tensor over a continuous shell and the optimal shape of polygons in a corresponding gridshell. Polygonal meshes with uniform density and isotropic cells under this metric exhibit variable density and anisotropy in Euclidean space, thus achieving a better distribution of the strain energy over their elements. Meshes are further optimized taking into account symmetry and regularity of cells to improve aesthetics. We experiment with quad meshes and hex-dominant meshes, demonstrating that our gridshells achieve better static performance than state-of-the-art gridshells.
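One plausible way to turn a stress tensor into an anisotropic metric is to reuse its eigenvectors as metric axes and its absolute principal stresses as scalings, then measure edge lengths under that metric. The sketch below does exactly this; the paper's actual scaling law relating stress to optimal polygon shape is not reproduced, so treat this construction as an assumption.

```python
# Minimal sketch of inducing an anisotropic metric from a 2x2 stress tensor
# and measuring edge lengths under it.
import numpy as np

def metric_from_stress(S, eps=1e-8):
    # Assumed construction: eigenvectors give the metric axes, absolute
    # principal stresses give the scaling (higher stress => shorter edges
    # under the metric => denser, elongated cells in Euclidean space).
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.abs(w) + eps) @ V.T

def edge_length(M, e):
    return float(np.sqrt(e @ M @ e))          # length of edge vector e under M

S = np.array([[2.0, 0.5], [0.5, 1.0]])        # hypothetical stress tensor
M = metric_from_stress(S)
print(edge_length(M, np.array([1.0, 0.0])))
```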

10.
The labeling of data sets is a time-consuming task, which is, however, an important prerequisite for machine learning and visual analytics. Visual-interactive labeling (VIAL) gives users an active role in the labeling process, with the goal of combining the potentials of humans and machines to make labeling more efficient. Recent experiments showed that users apply different strategies when selecting instances for labeling with visual-interactive interfaces. In this paper, we contribute a systematic quantitative analysis of such user strategies. We identify computational building blocks of user strategies, formalize them, and investigate their potentials for different machine learning tasks in systematic experiments. The core insights of our experiments are as follows. First, particular user strategies can considerably mitigate the bootstrap (cold start) problem in early labeling phases. Second, they have the potential to outperform existing active learning strategies in later phases. Third, the identified core building blocks can serve as the basis for novel selection strategies. Overall, we observed that data-based user strategies (clusters, dense areas) work considerably well in early phases, while model-based user strategies (e.g., class separation) perform better in later phases. The insights gained from this work can be applied to develop novel active learning approaches as well as to better guide users in visual-interactive labeling.
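A data-based "cluster" strategy for the cold start can be formalized in a few lines: cluster the unlabeled data and present the instances nearest to the centroids as the first labeling candidates. This is one plausible formalization of the strategies studied, not the paper's exact implementation.

```python
# Minimal sketch of a data-based cold-start strategy: pick the instances
# nearest to k-means centroids as the first labeling candidates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=5, random_state=0)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# For each cluster, select the instance closest to the centroid.
picks = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_]
print(picks)   # indices to present to the user for labeling first
```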

11.
One main task for domain experts in analysing their nD data is to detect and interpret class/cluster separations and outliers. In fact, an important question is which features/dimensions separate classes best or allow a cluster-based data classification. Common approaches rely on projections from nD to 2D, which come with several challenges: the space of projections contains infinitely many candidates (how to find the right one?); projections suffer from distortions and misleading effects (how much can the projected class/cluster separation be trusted?); and projections involve the complete set of dimensions/features (how to identify irrelevant ones?). To address these challenges, we introduce a visual analytics concept for feature selection based on linear discriminative star coordinates (DSC), which generate optimal cluster-separating views, in a linear sense, for both labeled and unlabeled data. This way the user is able to explore how each dimension contributes to clustering. To support exploring relations between clusters and data dimensions, we provide a set of cluster-aware interactions that allow users to iterate through subspaces of both records and features in a guided manner. We demonstrate our feature selection approach for optimal cluster/class separation analysis with several experiments on real-life high-dimensional benchmark data sets.
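A linear, class-separating view of the kind DSC produces can be approximated with LDA, whose projection matrix also exposes how much each data dimension contributes to the 2D axes. Note that the paper's DSC additionally handles unlabeled data, which this sketch does not.

```python
# Minimal sketch of a linear discriminative view: use LDA to get a 2D linear
# projection and read each data dimension's contribution to the view axes.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Y = lda.transform(X)                  # 2D view with classes linearly separated

# scalings_ columns span the discriminant axes; row i is the "star coordinate"
# axis vector of data dimension i, and its length indicates the contribution.
axes = lda.scalings_[:, :2]
for i, a in enumerate(axes):
    print(f"dim {i}: axis={a}, contribution={np.linalg.norm(a):.2f}")
```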

12.
Given a set of rectangles embedded in the plane, we consider the problem of adjusting the layout to remove all overlap while preserving the orthogonal order of the rectangles. The objective is to minimize the displacement of the rectangles. We call this problem Minimum-Displacement Overlap Removal (MDOR). Our interest in this problem is motivated by the application of displaying metadata of archaeological sites. Because most existing overlap removal algorithms are not designed to minimize displacement while preserving orthogonal order, we present and compare several approaches which are tailored to our particular use case. We introduce a new overlap removal heuristic which we call reArrange. Although conceptually simple, it is very effective in removing the overlap while keeping the displacement small. Furthermore, we propose an additional procedure to repair the orthogonal order after every iteration, with which we extend both our new heuristic and PRISM, a widely used overlap removal algorithm. We compare the performance of both approaches with and without this order repair method. The experimental results indicate that reArrange is very effective for heterogeneous input data where the overlap is concentrated in a few dense regions.
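For contrast with the paper's tailored approaches, a naive overlap-removal iteration is easy to state: repeatedly push each overlapping pair apart along the axis of least penetration, splitting the displacement between the two rectangles. The sketch below does this without enforcing the orthogonal order; it is a baseline, not reArrange.

```python
# Naive overlap removal: separate overlapping pairs along the axis of least
# penetration until no overlap remains. Orthogonal order is not preserved.
def separate_pair(a, b):
    """a, b: [x, y, w, h] with (x, y) the lower-left corner; mutated in place."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])   # overlap width
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])   # overlap height
    if dx <= 0 or dy <= 0:
        return False                                        # no overlap
    if dx < dy:                                             # cheaper to move in x
        s = dx / 2 if a[0] < b[0] else -dx / 2
        a[0] -= s; b[0] += s
    else:
        s = dy / 2 if a[1] < b[1] else -dy / 2
        a[1] -= s; b[1] += s
    return True

rects = [[0, 0, 2, 2], [1, 1, 2, 2], [5, 5, 1, 1]]          # hypothetical layout
for _ in range(100):                                        # iterate to a fixed point
    if not any(separate_pair(rects[i], rects[j])
               for i in range(len(rects)) for j in range(i + 1, len(rects))):
        break
print(rects)
```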

13.
We present a natural extension of two-dimensional parallel-coordinates plots for revealing relationships in time-dependent multi-attribute data, building on the idea that time can be considered the third dimension. A time slice through the visualization represents a certain point in time and can be viewed as a regular parallel-coordinates display. A vertical slice through one of the axes of the parallel-coordinates display would show a time-series plot. For a focus-and-context integration of both views, we embed time-series plots between two adjacent axes of the parallel-coordinates plot. Both time-series plots are drawn using a pseudo three-dimensional perspective with a single vanishing point. An independent parallel-coordinates panel that connects the two perspectively displayed time-series plots can move forward and backward in time to reveal changes in the relationship between the time-dependent attributes. The visualization of time-series plots in the context of the parallel-coordinates plot facilitates the exploration of time-related aspects of the data without the need to switch to a separate display. We provide a consistent set of tools for selecting and contrasting subsets of the data, which are important for various application domains.

14.
Aggregate scattering operators (ASOs) describe the overall scattering behavior of an asset (i.e., an object or volume, or a collection thereof), accounting for all orders of its internal scattering. We propose a practical way to precompute and compactly store ASOs and demonstrate their ability to accelerate path tracing. Our approach is modular, avoiding costly and inflexible scene-dependent precomputation. This is achieved by decoupling light transport within and outside of each asset, and precomputing on a per-asset level. We store the internal transport in a reduced-dimensional subspace tailored to the structure of the asset geometry, its scattering behavior, and typical illumination conditions, allowing the ASOs to maintain good accuracy with modest memory requirements. The precomputed ASO can be reused across all instances of the asset and across multiple scenes. We augment ASOs with functionality enabling multi-bounce importance sampling, fast short-circuiting of complex light paths, and compact caching, while retaining rapid progressive preview rendering. We demonstrate the benefits of our ASOs by efficiently path tracing scenes containing many instances of objects with complex inter-reflections or multiple scattering.

15.
This paper presents a tool that enables the direct editing of surface features in large point clouds or meshes. This is made possible by a novel multi-scale analysis of unstructured point clouds that automatically extracts the number of relevant features, together with their respective scales, all over the surface. Combining this ingredient with an adequate multi-scale decomposition then allows us to directly enhance or reduce each feature independently. Our feature extraction is based on the analysis of the scale variations of locally fitted surface primitives, combined with unsupervised learning techniques. Our tool may be applied either globally or locally, and millions of points are handled in real time. The resulting system enables users to accurately edit complex geometries with minimal interaction.

16.
Given a planar point set sampled from an object boundary, the process of approximating the original shape is called curve reconstruction. In this paper, a novel non-parametric curve reconstruction algorithm based on Delaunay triangulation is proposed, and it is theoretically proven that the proposed method reconstructs the original curve under ε-sampling. Starting from an initial Delaunay seed edge, the algorithm proceeds by finding an appropriate neighbouring point and adding an edge between them. Experimental results show that the proposed algorithm is capable of reconstructing curves with different features such as sharp corners, outliers, multiple objects, and objects with holes. The proposed method also works for open curves. Based on a small user study, the paper also discusses an application of the proposed algorithm to reconstructing hand-drawn skip-stroke sketches, which is useful in various sketch-based interfaces.
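The greedy skeleton of such an algorithm can be sketched as: triangulate, then walk from a seed point to the nearest unused Delaunay neighbor. The paper's rule for choosing the "appropriate" next point (and the ε-sampling guarantee) is more careful; the version below is only a simplified heuristic.

```python
# Simplified heuristic sketch: greedy walk over Delaunay adjacency,
# always moving to the nearest unused neighbor of the current point.
import numpy as np
from scipy.spatial import Delaunay

def reconstruct(points):
    tri = Delaunay(points)
    nbrs = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:                 # collect Delaunay adjacency
        for a in simplex:
            for b in simplex:
                if a != b:
                    nbrs[a].add(b)
    cur, used, path = 0, {0}, [0]
    while True:
        cand = [j for j in nbrs[cur] if j not in used]
        if not cand:
            break
        nxt = min(cand, key=lambda j: np.linalg.norm(points[j] - points[cur]))
        path.append(nxt); used.add(nxt); cur = nxt
    return path                                   # ordered point indices

t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]                 # points sampled from a circle
print(reconstruct(pts)[:10])
```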

17.
Streamgraphs were popularized in 2008 when The New York Times used them to visualize box office revenues for 7500 movies over 21 years. The aesthetics of a streamgraph are affected by three components: the ordering of the layers, the shape of the lowest curve of the drawing (known as the baseline), and the labels for the layers. As of today, the ordering and baseline-computation algorithms proposed in Byron and Wattenberg's paper are still considered the state of the art. However, their ordering algorithm exploits statistical properties of the movie revenue data that may not hold for other data. In addition, the baseline optimization is based on a definition of visual energy that in some cases results in a considerable amount of visual distortion. We offer an ordering algorithm that works well regardless of the properties of the input data, and propose a 1-norm based definition of visual energy and an associated solution method that overcomes the limitation of the original baseline optimization procedure. Furthermore, we propose an efficient layer-labeling algorithm that scales linearly with the data size, in place of the brute-force algorithm adopted by Byron and Wattenberg. We demonstrate the advantage of our algorithms over existing techniques on a number of real-world data sets.
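For context, Byron and Wattenberg's least-squares baseline has a closed form: with layers f_1..f_n, the wiggle-minimizing baseline is g0 = -(1/(n+1)) Σ_j (n-j+1) f_j. The sketch below implements that classic 2-norm version; the 1-norm energy proposed here would instead be solved as a linear program, which is not shown.

```python
# Minimal sketch of the classic least-squares baseline (wiggle minimization):
# g0 = -(1/(n+1)) * sum_j (n - j + 1) * f_j  for layers f_1..f_n.
import numpy as np

def baseline_min_wiggle(F):
    """F: (n_layers, n_timesteps) nonnegative array. Returns g0 per timestep."""
    n = F.shape[0]
    weights = np.arange(n, 0, -1)          # n - j + 1 for j = 1..n
    return -(weights[:, None] * F).sum(axis=0) / (n + 1)

F = np.abs(np.random.randn(5, 100)).cumsum(axis=1)   # hypothetical layer data
g0 = baseline_min_wiggle(F)
layers = g0 + np.cumsum(F, axis=0)                    # stacked upper boundaries
```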

18.
Fast realistic rendering of objects in scattering media is still a challenging topic in computer graphics. In the presence of participating media, a light beam is repeatedly scattered by media particles, changing direction and spreading out. Explicitly evaluating this beam distribution would enable efficient simulation of multiple scattering events without involving costly stochastic methods. Narrow beam theory provides explicit equations that approximate light propagation in a narrow incident beam. Based on this theory, we propose a closed-form distribution function for scattered beams. We successfully apply it to the image synthesis of scenes in which scattering occurs, and show that our proposed estimation method is more accurate than those based on the Wentzel-Kramers-Brillouin (WKB) theory.

19.
The selection of meaningful lines for 3D line data visualization has been intensively researched in recent years. Most approaches focus on single line fields, where one line passes through each domain point. This paper presents a selection approach for sets of line fields that is based on a global optimization of the opacity of candidate lines. For this, existing approaches for single line fields are modified to handle significantly larger numbers of line representatives. Furthermore, time coherence is addressed for animations, making this the first approach that solves the line selection problem for 3D time-dependent flow. We apply our technique to visualize dense sets of pathlines, sets of magnetic field lines, and animated sets of pathlines, streaklines, and masslines.

20.
Shadow removal is a challenging problem, and previous approaches often produce de-shadowed regions that are visually inconsistent with the rest of the image. We propose an automatic shadow region harmonization approach that makes the appearance of a de-shadowed region (produced using any previous technique) compatible with the rest of the image. We use a shadow-guided, patch-based image synthesis approach that reconstructs the shadow region using patches sampled from non-shadowed regions. This result is then refined based on the reconstruction confidence to handle unique textures. Qualitative comparisons over a wide range of images, and a quantitative evaluation on a benchmark dataset, show that our technique significantly improves upon the state of the art.
