Similar Documents
20 similar documents found.
1.
Applying lossy data compression to climate model output is an attractive means of reducing the enormous volumes of data generated by climate models. However, because lossy compression does not exactly preserve the original data, it must be applied to scientific data judiciously. To this end, a collection of measures is being developed to evaluate various aspects of lossy compression quality on climate model output. Given the importance of data visualization to climate scientists interacting with model output, any suite of measures must include a means of assessing whether images generated from the compressed model data are noticeably different from images based on the original model data. Therefore, in this work we conduct a forced-choice visual evaluation study with climate model data, surveying more than one hundred participants with domain-relevant expertise. In addition to images created from unaltered climate model data, study images are generated from model data subjected to two different lossy compression approaches at multiple levels (amounts) of compression. Study participants indicate whether a visual difference from the reference image can be seen due to lossy compression effects. We assess the relationship between the perceptual scores from the user study and a number of common (full-reference) image quality assessment (IQA) measures, and use statistical models to suggest appropriate measures and thresholds for evaluating lossily compressed climate data. We find that the structural similarity index (SSIM) performs best, and our findings indicate that the threshold required for climate model data is much higher than previous findings in the literature.
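As a rough illustration of the kind of image-level gate this suggests (not the study's code), the sketch below scores an image rendered from compressed data against its reference with SSIM via scikit-image; the threshold value is a hypothetical placeholder, reflecting only the finding that climate data needs a far stricter cutoff than values typical in the IQA literature.

```python
# Illustrative sketch (not the study's code): gate a compressed-data image
# on SSIM against the image rendered from the original data.
import numpy as np
from skimage.metrics import structural_similarity

def passes_ssim_gate(img_ref, img_cmp, threshold=0.995):
    """Return (ok, score). `threshold` is a hypothetical placeholder: the
    study's finding is that climate images need a much stricter cutoff
    than values typically reported in the IQA literature."""
    score = structural_similarity(
        img_ref, img_cmp,
        data_range=float(img_ref.max() - img_ref.min()),
        channel_axis=-1,  # assumes HxWxC color images (scikit-image >= 0.19)
    )
    return score >= threshold, score
```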

2.
Diffusion curves are a powerful vector graphic representation that stores an image as a set of 2D Bézier curves with colors defined on either side. These colors are diffused over the image plane, resulting in smooth color regions as well as sharp boundaries. In this paper, we introduce a new automatic diffusion curve coloring algorithm. We start by defining a geometric heuristic for the maximum density of color control points along the image curves. Following this, we present a new algorithm to set the colors of these points so that the resulting diffused image is as close as possible to a source image in a least-squares sense. We compare our coloring solution to the existing one, which fails for textured regions, small features, and inaccurately placed curves. The second contribution of the paper is to extend the diffusion curve representation to include texture details based on Gabor noise. Like the curves themselves, the defined texture is resolution-independent and represented compactly. We define methods to automatically make an initial guess for the noise texture, and we provide intuitive manual controls to edit the parameters of the Gabor noise. Finally, we show that the diffusion curve representation itself extends to storing any number of attributes in an image, and we demonstrate this functionality with image stippling and hatching applications.
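A minimal sketch of sparse-convolution Gabor noise, the noise model the paper builds its texture details on; the kernel parameters and impulse count here are illustrative, not the authors' values.

```python
# Toy sparse-convolution Gabor noise: a sum of randomly placed, randomly
# weighted Gabor kernels (Gaussian envelope times an oriented harmonic).
import numpy as np

rng = np.random.default_rng(0)

def gabor_kernel(dx, dy, a=4.0, f0=8.0, omega0=np.pi / 4):
    """Gabor kernel: bandwidth a, frequency f0, orientation omega0
    (all illustrative defaults)."""
    env = np.exp(-np.pi * a * a * (dx * dx + dy * dy))
    return env * np.cos(2 * np.pi * f0 * (dx * np.cos(omega0) + dy * np.sin(omega0)))

def gabor_noise(h, w, n_impulses=200, **kw):
    ys, xs = np.mgrid[0:h, 0:w] / max(h, w)   # normalized pixel coordinates
    img = np.zeros((h, w))
    for _ in range(n_impulses):               # sparse convolution
        px, py = rng.random(2)                 # random impulse position
        wgt = rng.uniform(-1.0, 1.0)           # random impulse weight
        img += wgt * gabor_kernel(xs - px, ys - py, **kw)
    return img
```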

3.
We present a novel image resizing method which attempts to ensure that important local regions undergo a geometric similarity transformation while preserving image edge structure. To accomplish this, we define handles to describe both local regions and image edges, and assign each handle a weight based on an importance map for the source image. Inspired by conformal energy, which is widely used in geometry processing, we construct a novel quadratic distortion energy to measure the shape distortion of each handle. The resizing result is obtained by minimizing the weighted sum of the quadratic distortion energies of all handles. Compared to previous methods, ours diffuses distortion better in all directions, and important image edges are well preserved. The method is efficient and offers a closed-form solution.

4.
5.
In this paper, we introduce the concept of isosurface similarity maps for the visualization of volume data. Isosurface similarity maps present structural information of a volume data set by depicting similarities between individual isosurfaces, quantified by a robust information-theoretic measure. Unlike conventional histograms, they are not based on the frequency of isovalues and/or derivatives and therefore provide complementary information. We demonstrate that this new representation can be used to guide transfer function design and visualization parameter specification. Furthermore, we use isosurface similarity to develop an automatic parameter-free method for identifying representative isovalues. Using real-world data sets, we show that isosurface similarity maps can be a useful addition to conventional classification techniques.
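A toy version of the idea, assuming the information-theoretic measure is plain mutual information between per-isovalue distance fields; the paper's measure is a robust variant, so treat this sketch as illustrative only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def iso_distance_field(vol, isovalue):
    """Unsigned voxel distance to the isosurface {vol == isovalue}."""
    inside = vol >= isovalue
    # exactly one of the two transforms is nonzero at each voxel
    return distance_transform_edt(inside) + distance_transform_edt(~inside)

def mutual_information(a, b, bins=64):
    """MI (in nats) between two scalar fields via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def isosurface_similarity_map(vol, isovalues):
    """Symmetric matrix of pairwise isosurface similarities."""
    fields = [iso_distance_field(vol, v) for v in isovalues]
    n = len(fields)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            m[i, j] = m[j, i] = mutual_information(fields[i], fields[j])
    return m
```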

6.
We present a method for analytically calculating an anti-aliased rasterization of arbitrary polygons or fonts bounded by Bézier curves in 2D, as well as oriented triangle meshes in 3D. Our algorithm rasterizes multiple resolutions simultaneously using a hierarchical wavelet representation and is robust to degenerate inputs. We show that using the simplest wavelet, the Haar basis, is equivalent to applying a box filter to the rasterized image. Because we evaluate wavelet coefficients through line integrals in 2D, we are able to derive analytic solutions for polygons that have Bézier curve boundaries of any order, and we provide solutions for quadratic and cubic curves. In 3D, we compute the wavelet coefficients through analytic surface integrals over triangle meshes and show how to do so in a computationally efficient manner.
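The claimed Haar/box-filter equivalence is easy to see in code: one Haar analysis step on the approximation channel is, up to normalization, a 2x2 box filter. A minimal sketch:

```python
import numpy as np

def haar_downsample(img):
    """One Haar analysis step on the scaling (approximation) channel.
    Up to normalization this is exactly a 2x2 box filter, illustrating
    the paper's observation that Haar rasterization == box filtering."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]   # trim odd edges
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2])
```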

7.
Optimization of images with bad composition has attracted increasing attention in recent years. Previous methods, however, seldom consider image similarity when improving composition aesthetics. This may lead to significant content changes or large distortions, resulting in an unpleasant user experience. In this paper, we present a new algorithm for improving image composition aesthetics while remaining as faithful as possible to the original image content. Our method computes an improved image using a unified model of composition aesthetics and image similarity. The composition aesthetics term obeys the rule of thirds and aims to enhance image composition. The similarity term, in contrast, penalizes image difference and distortion caused by composition adjustment. We use an edge-based measure of structure similarity, which closely matches human visual perception, to compare the optimized image with the original one. We describe an effective scheme to generate the optimized image under this objective model. Our algorithm is able to produce recomposed images with minimal visual distortion in an elegant and user-controllable manner. We show the superiority of our algorithm by comparing our results with those of previous methods.
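A toy instantiation of the unified objective, assuming a brute-force crop search, a single salient point, and a framing-drift stand-in for the paper's edge-based similarity term; all weights are hypothetical.

```python
import numpy as np

def thirds_penalty(cx, cy, w, h):
    """Distance of the salient point from the nearest rule-of-thirds
    intersection, normalized by the crop diagonal (lower = better)."""
    pts = [(w * i / 3.0, h * j / 3.0) for i in (1, 2) for j in (1, 2)]
    return min(np.hypot(cx - px, cy - py) for px, py in pts) / np.hypot(w, h)

def best_crop(img_w, img_h, salient, crop_w, crop_h, lam=0.5, step=20):
    """Pick the crop offset minimizing aesthetics + lam * similarity cost.
    The similarity stand-in (how far the framing drifts from the original)
    is a toy; the paper uses an edge-based structure similarity measure."""
    sx, sy = salient
    best, best_cost = None, np.inf
    for x0 in range(0, img_w - crop_w + 1, step):
        for y0 in range(0, img_h - crop_h + 1, step):
            if not (x0 <= sx < x0 + crop_w and y0 <= sy < y0 + crop_h):
                continue  # keep the subject in frame
            aes = thirds_penalty(sx - x0, sy - y0, crop_w, crop_h)
            sim = np.hypot(x0 + crop_w / 2 - img_w / 2,
                           y0 + crop_h / 2 - img_h / 2) / np.hypot(img_w, img_h)
            cost = aes + lam * sim
            if cost < best_cost:
                best, best_cost = (x0, y0), cost
    return best
```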

8.
We present a novel particle-based method for stable simulation of elasto-plastic materials. The main contribution of our method is an implicit numerical integrator, using a physically based model, for computing particles that undergo both elastic and plastic deformations. The main advantage of our implicit integrator is that it allows the use of large time steps while still preserving stable and physically plausible simulation results. As a key component of our algorithm, at each time step we compute the particle positions and velocities based on a sparse linear system, which we solve efficiently on the graphics hardware. Compared to existing techniques, our method allows for a much wider range of stiffness and plasticity settings. In addition, our method can significantly reduce the computation cost for a certain range of material types. We demonstrate fast and stable simulations for a variety of elasto-plastic materials, ranging from highly stiff elastic materials to highly plastic ones.
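A sketch of the kind of sparse implicit (backward-Euler) velocity update such an integrator performs each step, in the standard form (M - h^2 K) dv = h (f + h K v); the assembly of f and K from an elasto-plastic model, and the paper's GPU solver, are omitted.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_euler_step(x, v, m, f, K, h):
    """One backward-Euler velocity/position update for a particle system.
    x, v, f: flattened position/velocity/force vectors; m: per-DOF masses;
    K: sparse force Jacobian df/dx. Solves (M - h^2 K) dv = h (f + h K v).
    The paper assembles such a system for elasto-plastic forces and solves
    it on the GPU; a CPU sparse direct solve stands in here."""
    M = sp.diags(m)
    A = (M - h * h * K).tocsc()      # system matrix damps stiff modes
    b = h * (f + h * (K @ v))
    dv = spla.spsolve(A, b)
    v_new = v + dv
    return x + h * v_new, v_new
```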

9.
Detecting similarity between texts is a frequently encountered text mining task. Because the measurement of similarity is typically composed of a number of metrics, and some measures are sensitive to subjective interpretation, a generic detector obtained using machine learning often has difficulties balancing the roles of different metrics according to the semantic context exhibited in a specific collection of texts. In order to facilitate human interaction in a visual analytics process for text similarity detection, we first map the problem of pairwise sequence comparison to that of image processing, allowing patterns of similarity to be visualized as a 2D pixelmap. We then devise a visual interface to enable users to construct and experiment with different detectors using primitive metrics, in a way similar to constructing an image processing pipeline. We deployed this new approach for the identification of commonplaces in 18th-century literary and print culture. Domain experts were then able to make use of the prototype system to derive new scholarly discoveries and generate new hypotheses.
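A minimal sketch of the pixelmap construction: pairwise sequence comparison becomes a 2D array that can be displayed or filtered like an image. The Jaccard metric below is just one example of a primitive metric a user might plug into such a pipeline.

```python
import numpy as np

def similarity_pixelmap(seqs_a, seqs_b, sim):
    """Render pairwise sequence comparison as an 'image': cell (i, j)
    holds sim(seqs_a[i], seqs_b[j]), ready for display as a pixelmap or
    for image-processing-style filtering in a detector pipeline."""
    m = np.empty((len(seqs_a), len(seqs_b)))
    for i, a in enumerate(seqs_a):
        for j, b in enumerate(seqs_b):
            m[i, j] = sim(a, b)
    return m

def jaccard(a, b):
    """One primitive metric: token-set overlap between two sentences."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(1, len(sa | sb))

# e.g.  m = similarity_pixelmap(sentences_1, sentences_2, jaccard)
```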

10.
Previous viewpoint selection methods in volume visualization are generally based on deterministic measures of viewpoint quality. However, these may not express users' familiarity with, and aesthetic sense for, features of interest. In this paper, we propose an image-based viewpoint selection model to learn how visualization experts choose representative viewpoints for volumes with similar features. For a given volume, we first collect images with similar features; these images reflect the viewpoint preferences of the experts when visualizing such features. Each collected image casts a vote for its best-matching viewpoint, based on an image similarity measure that evaluates the spatial shape and appearance similarity between the collected image and the image rendered from that viewpoint. The optimal viewpoint is the one with the most votes from the collected images, that is, the viewpoint chosen by most visualization experts for similar features. We performed experiments on various volume data sets used in volume visualization and made comparisons with traditional viewpoint selection methods. The results demonstrate that our model can select more canonical viewpoints, which are consistent with human perception.
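The voting scheme itself is compact; a sketch follows, with render() and img_sim() standing in for the renderer and the paper's shape-plus-appearance similarity measure.

```python
import numpy as np

def select_viewpoint(candidate_views, collected_images, render, img_sim):
    """Each collected image votes for its best-matching candidate
    viewpoint; the viewpoint with the most votes wins. `render(view)`
    and `img_sim(a, b)` are assumed stand-ins for the renderer and the
    paper's shape + appearance similarity measure."""
    renders = [render(v) for v in candidate_views]
    votes = np.zeros(len(candidate_views), dtype=int)
    for img in collected_images:
        scores = [img_sim(img, r) for r in renders]
        votes[int(np.argmax(scores))] += 1
    return candidate_views[int(np.argmax(votes))], votes
```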

11.
Understanding symmetries and arrangements in existing content is the first step towards providing higher-level content-aware editing capabilities. Such capabilities may include edits that both preserve existing structure and synthesize entirely new structures based on the extracted pattern rules. In this paper we show how to detect regular symmetries and arrangements along curved segments in vector art. We determine individual elements in the art by using the transformation similarity of sequences of sample points on the input curves. Then we detect arrangements of those elements along an arbitrary curved path. We can un-warp the arrangement path to detect symmetries near the path. We introduce novel applications in the form of editing elements that are arranged along a curved path, including sliding them along the path and changing their spacing or scale. We also allow the user to brush the recognized elements along new paths.
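Detecting repeated elements hinges on estimating a similarity transform between point sequences; in 2D this is a complex-valued linear least-squares problem. A sketch (not the authors' exact formulation):

```python
import numpy as np

def fit_similarity(z, w):
    """Least-squares similarity transform w ~ a*z + b between two point
    sequences encoded as complex numbers (a encodes rotation + uniform
    scale, b translation). The residual indicates whether the two curve
    segments are instances of the same repeated element."""
    z = np.asarray(z, dtype=complex)
    w = np.asarray(w, dtype=complex)
    A = np.column_stack([z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(A, w, rcond=None)
    resid = np.linalg.norm(A @ np.array([a, b]) - w) / np.sqrt(len(z))
    return a, b, resid
```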

12.
Selecting good views of high-dimensional data using class consistency
Many visualization techniques involve mapping high-dimensional data spaces to lower-dimensional views. Unfortunately, mapping a high-dimensional data space into a scatterplot involves a loss of information; even worse, it can give a misleading picture of valuable structure in higher dimensions. In this paper, we propose class consistency as a measure of the quality of the mapping. Class consistency enforces the constraint that classes of n-D data are shown clearly in 2-D scatterplots. We propose two quantitative measures of class consistency, one based on the distance to the class's center of gravity, and another based on the entropies of the spatial distributions of classes. We performed an experiment in which users chose good views, and show that class consistency has good precision and recall. We also evaluate both consistency measures over a range of data sets and show that these measures are efficient and robust.
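A minimal sketch of a centroid-based consistency score in the spirit of the first measure: the fraction of points whose nearest class centroid is their own. The paper's exact formulation may differ.

```python
import numpy as np

def centroid_consistency(points2d, labels):
    """Toy class consistency of a 2D projection: fraction of points whose
    nearest class centroid belongs to their own class (1.0 = classes shown
    perfectly cleanly; lower = classes overlap in the view)."""
    points2d, labels = np.asarray(points2d), np.asarray(labels)
    classes = np.unique(labels)
    cents = np.array([points2d[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(points2d[:, None, :] - cents[None, :, :], axis=2)
    nearest = classes[np.argmin(d, axis=1)]
    return float(np.mean(nearest == labels))
```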

13.
This paper proposes two variants of a simple but efficient algorithm for structure-preserving halftoning. Our algorithm extends Floyd-Steinberg error diffusion; the goal of our extension is not only to produce good tone similarity but also to preserve structure, and especially contrast, motivated by our intuition that human perception is sensitive to contrast. By enhancing contrast we attempt to preserve and enhance structure as well. Our basic algorithm employs an adaptive, contrast-aware mask. To enhance contrast, darker pixels should be more likely to be chosen as black while lighter pixels should be more likely to be set as white. Therefore, when positive error is diffused to nearby pixels in a mask, the dark pixels absorb less error and the light pixels absorb more; conversely, negative error is distributed preferentially to dark pixels. We also propose using a mask whose values drop off steeply from the centre, intended to promote good spatial distribution. The basic method is very fast, its speed depending mainly on the size of the mask, but it suffers from distracting patterns. We therefore propose a variant that overcomes the first algorithm's shortcomings while maintaining its advantages through a priority-aware scheme. Rather than proceeding in random or raster order, we sort the image first; each pixel is assigned a priority based on its up-to-date distance to black or to white, and pixels with extreme intensities are processed earlier. Since we use the same mask strategy as before, we promote good spatial distribution and high contrast. We use tone similarity, structure similarity, and contrast similarity to validate our algorithm. Comparisons with recent structure-aware algorithms show that our method gives better results without sacrificing speed.
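A toy version of the basic (non-prioritized) algorithm, using the plain Floyd-Steinberg mask rather than the paper's steeply decaying one; only the contrast-aware redistribution of error is modeled.

```python
import numpy as np

def contrast_aware_halftone(gray):
    """Error diffusion with contrast-aware error distribution, in the
    spirit of the paper's basic algorithm. Uses the plain Floyd-Steinberg
    mask for brevity (the paper's mask drops off steeply from the centre).
    Input: grayscale image in [0, 1]. Output: binary image."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    mask = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            nbrs = [(y + dy, x + dx, wgt) for dy, dx, wgt in mask
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            if not nbrs:
                continue
            # Contrast-aware modulation: positive error goes mostly to
            # light pixels, negative error mostly to dark pixels.
            raw = []
            for ny, nx, wgt in nbrs:
                t = np.clip(img[ny, nx], 0.0, 1.0)
                raw.append(wgt * (t if err > 0 else 1.0 - t))
            total = sum(raw)
            if total == 0.0:
                continue
            for (ny, nx, _), r in zip(nbrs, raw):
                img[ny, nx] += err * r / total
    return out
```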

14.
Transfinite barycentric kernels are the continuous version of traditional barycentric coordinates and are used to define interpolants of values given on a smooth planar contour. When the data is two-dimensional, i.e. the boundary of a planar map, these kernels may be conveniently expressed using complex number algebra, simplifying much of the notation and results. In this paper we develop some of the basic complex-valued algebra needed to describe these planar maps, and use it to define similarity kernels, a natural alternative to the usual barycentric kernels. We develop the theory behind similarity kernels, explore their properties, and show that the transfinite versions of the popular three-point barycentric coordinates (Laplace, mean value and Wachspress) have surprisingly simple similarity kernels. We furthermore show how similarity kernels may be used to invert injective transfinite barycentric mappings using an iterative algorithm which converges quite rapidly. This is useful for rendering images deformed by planar barycentric mappings.
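For intuition, here is the discrete cousin of these kernels, mean value coordinates for a polygon, written in the same complex-number style the paper advocates:

```python
import numpy as np

def mean_value_coords(z, poly):
    """Mean value coordinates of point z with respect to a closed polygon
    whose vertices are given as complex numbers; all rotation/angle
    handling happens through complex arithmetic, echoing the paper's
    notation. Assumes z is strictly inside the polygon."""
    v = np.asarray(poly, dtype=complex) - z   # vertices relative to z
    n = len(v)
    w = np.zeros(n)
    for i in range(n):
        prv, cur, nxt = v[i - 1], v[i], v[(i + 1) % n]
        a_prev = np.angle(cur / prv)          # signed angle prv -> cur
        a_next = np.angle(nxt / cur)          # signed angle cur -> nxt
        w[i] = (np.tan(a_prev / 2) + np.tan(a_next / 2)) / abs(cur)
    return w / w.sum()
```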

15.
Clustering algorithms support exploratory data analysis by grouping inputs that share similar features. Clustering unlabelled data in particular is a notoriously difficult problem, because users must choose not only a suitable clustering algorithm but also a suitable number of clusters. Known issues of existing clustering validity measures include instability in the presence of noise and restrictive assumptions about cluster shapes. In addition, they cannot evaluate individual clusters locally. We present a new measure for assessing and comparing different clusterings both on a global and on a local level. Our measure is based on the topological method of persistent homology, which is stable and unbiased towards cluster shapes. Based on our measure, we also describe a new visualization that displays similarities between different clusterings (using a global graph view) and supports their comparison on the individual cluster level (using a local glyph view). We demonstrate how our visualization helps detect different, but equally valid, clusterings of data sets from multiple application domains.
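For a feel of the underlying machinery: the 0-dimensional persistence barcode of a point cloud coincides with single-linkage merge heights, so a toy "persistence" score needs no TDA library. This only gestures at the paper's measure, which is considerably more elaborate.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def total_persistence_0d(points):
    """Total 0-dimensional persistence of a point cloud: the 0-dim
    barcode of a Vietoris-Rips filtration equals the set of single-linkage
    merge heights. Larger totals mean connected components stay separate
    longer, i.e. the data is 'more clustered'. Toy score only."""
    merges = linkage(points, method='single')
    return float(merges[:, 2].sum())   # column 2 holds merge distances
```

Such a score is shape-agnostic: unlike centroid-based validity indices, it does not assume convex or spherical clusters, which is the property the paper exploits.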

16.
Collections of objects such as images are often presented visually in a grid, because it is a compact representation that lends itself well to search and exploration. Most grid layouts are sorted using very basic criteria, such as date or filename. In this work we present a method to arrange collections of objects that respects an arbitrary distance measure. Pairwise distances are preserved as much as possible, while still producing the specific target arrangement, which may be a 2D grid, the surface of a sphere, a hierarchy, or any other shape. We show that our method can be used for infographics, collection exploration, summarization, data visualization, and even for solving problems such as where to seat family members at a wedding. We present a fast algorithm that can work on large collections, and we quantitatively evaluate how well distances are preserved.
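One simple way to instantiate the goal (not the paper's algorithm): embed the distance matrix in 2D with MDS, then assign objects to grid cells with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.manifold import MDS

def arrange_on_grid(D, rows, cols, seed=0):
    """Assign n = rows*cols objects (pairwise distance matrix D) to grid
    cells: MDS embeds the objects in 2D, then an optimal assignment snaps
    them to cell centers so that similar objects land in nearby cells."""
    n = rows * cols
    xy = MDS(n_components=2, dissimilarity='precomputed',
             random_state=seed).fit_transform(D)
    xy = (xy - xy.min(axis=0)) / np.ptp(xy, axis=0)  # normalize to [0,1]^2
    gy, gx = np.divmod(np.arange(n), cols)
    grid = np.column_stack([gx / max(cols - 1, 1), gy / max(rows - 1, 1)])
    cost = ((xy[:, None, :] - grid[None, :, :]) ** 2).sum(axis=-1)
    _, cell = linear_sum_assignment(cost)  # object i goes to cell[i]
    return cell
```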

17.
We present a geometry processing framework that allows direct manipulation or preservation of positional, metric, and curvature constraints anywhere on the surface of a geometric model. Target values for these properties can be specified point-wise or as integrated quantities over curves and surface patches embedded in the shape. For example, the user can draw several curves on the surface and specify desired target lengths, manipulate the normal curvature along these curves, or modify the area or principal curvature distribution of arbitrary surface patches. This user input is converted into a set of non-linear constraints. A global optimization finds the new deformed surface that best satisfies the constraints, while minimizing adaptable measures for metric and curvature distortion that provide explicit control of the deformation semantics. We illustrate how this approach enables flexible surface processing and shape editing operations not available in current systems.
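A 1D toy of the optimization structure, assuming a polyline, a single metric (length) constraint handled as a soft penalty, and L-BFGS in place of the paper's solver:

```python
import numpy as np
from scipy.optimize import minimize

def edit_curve_length(P0, target_len, w_constraint=100.0):
    """Deform a polyline to hit a prescribed total length (a 'metric'
    constraint) while penalizing distortion from the original shape.
    The paper does this with general positional/metric/curvature
    constraints on surfaces; this 1D analogue only shows the structure."""
    P0 = np.asarray(P0, float)

    def total_length(x):
        P = x.reshape(P0.shape)
        return np.linalg.norm(np.diff(P, axis=0), axis=1).sum()

    def objective(x):
        distortion = ((x - P0.ravel()) ** 2).sum()        # stay near original
        violation = (total_length(x) - target_len) ** 2   # soft constraint
        return distortion + w_constraint * violation

    res = minimize(objective, P0.ravel(), method='L-BFGS-B')
    return res.x.reshape(P0.shape)
```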

18.
Repeated scene elements are copious and ubiquitous in natural images. With previous image segmentation methods, cutting out these repeated elements usually involves tedious and laborious user interaction. In this paper, we present RepSnapping, a novel method oriented to cutout of repeated scene elements with much less user interaction. By exploiting the inherent similarity between repeated elements, a new optimization model is introduced to thread correlated elements into the segmentation procedure. The model proposed here enables efficient solution using max-flow/min-cut on an extended graph. Experiments indicate that RepSnapping facilitates cutout of repeated elements better than state-of-the-art interactive image segmentation and repetition detection methods.
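A sketch of the max-flow/min-cut core on a pixel grid using PyMaxflow; RepSnapping's extended graph, which threads correlated repeated elements together, is omitted here.

```python
import numpy as np
import maxflow  # pip install PyMaxflow

def binary_cut(unary_fg, unary_bg, smoothness=1.0):
    """Toy max-flow/min-cut segmentation of the kind RepSnapping extends:
    grid nodes with foreground/background terminal costs plus uniform
    pairwise smoothness edges. RepSnapping additionally links correlated
    repeated elements into one extended graph (not modeled here)."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(unary_fg.shape)
    g.add_grid_edges(nodes, smoothness)           # 4-connected pairwise terms
    g.add_grid_tedges(nodes, unary_fg, unary_bg)  # terminal (data) terms
    g.maxflow()
    return g.get_grid_segments(nodes)             # boolean label per pixel
```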

19.
Path generation is an important problem in many fields, especially robotics. One way to create a path between a source point z and a target point y inside a complex planar domain Ω is to define a non-negative distance function d(y, z), such that following the negative gradient of d (with respect to z) traces out such a path. This presents two challenges: (1) the mathematical challenge of defining d such that d(y, z) has a single minimum at z = y for any fixed y, because the gradient-descent path may otherwise terminate at a local minimum before reaching y; (2) the computational challenge of defining d such that it can be computed efficiently. Using the concepts of harmonic measure and f-divergence, we show how to assign a set of reduced coordinates to each point in Ω and to define a family of distance functions based on these coordinates, such that both the mathematical and the computational challenges are met. Since in practice, especially in robotics applications, the path is often restricted to follow the edges of a discrete network defined on a finite set of sites sampled from Ω, any method that works well in the continuous setting must be discretized appropriately to preserve the important properties of the continuous case. We show how to define a network connecting a finite set of sites such that a greedy routing algorithm, the discrete equivalent of continuous gradient descent, based on our reduced coordinates is guaranteed to generate a path in the network between any two sites. In many cases this network is close to a planar graph, especially if the set of sites is dense. Guaranteeing the existence of a greedy route between any two points in the graph is a significant advantage in practical applications, avoiding the complexity of other path-planning methods, such as shortest-path and A* algorithms. While the paths generated by our algorithm are not the shortest possible, in practice we found them to be close to the shortest.
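Greedy routing, the discrete counterpart of gradient descent on d, takes only a few lines; the paper's guarantee is precisely that the local-minimum branch below is never taken on its networks. Here graph (node -> neighbours) and d are assumed inputs.

```python
def greedy_route(graph, d, src, dst):
    """Greedy routing: from the current node, always step to the
    neighbour closest to the target under d(target, node). On networks
    with the paper's guarantee this always reaches dst; on arbitrary
    graphs it may stall at a local minimum, handled below."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(graph[cur], key=lambda n: d(dst, n))
        if d(dst, nxt) >= d(dst, cur):
            return None   # local minimum: no greedy progress possible
        path.append(nxt)
        cur = nxt
    return path
```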

20.