Found 20 similar documents.
1.
Laplacian mesh compression, also known as high‐pass mesh coding, is a popular technique for efficiently storing both static and dynamic triangle meshes that gained further recognition with the advent of perceptual mesh distortion evaluation metrics. Currently, the usual rule of thumb that drives the decision for a mesh compression algorithm is whether or not accuracy in absolute scale is required: Laplacian mesh encoding is chosen when perceptual quality is the main objective, while other techniques provide better results in terms of mechanistic error measures such as mean squared error. In this work, we present a modification of the Laplacian mesh encoding algorithm that preserves its benefits while substantially reducing the resulting absolute error. Our approach is based on analyzing the reconstruction stage and modifying the quantization of differential coordinates, so that the decoded result stays close to the input even in areas that are distant from anchor points. In our approach, we avoid solving an overdetermined system of linear equations and thus reduce data redundancy, improve conditioning, and achieve faster processing. Our approach can be directly applied to both static and dynamic mesh compression, and we provide quantitative results comparing our approach with state‐of‐the‐art methods.
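For readers unfamiliar with the baseline being improved here, the following is a minimal sketch of standard Laplacian (high‐pass) mesh coding, including the overdetermined anchored least‐squares reconstruction that the authors avoid. The uniform Laplacian, the function names, and the soft‐anchor weighting are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the standard Laplacian (high-pass) mesh coding baseline.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def uniform_laplacian(n_verts, edges):
    """Build the uniform (combinatorial) Laplacian L = D - A."""
    i, j = np.array(edges).T
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_verts, n_verts))
    A = (A + A.T).tocsr()
    A.data[:] = 1.0  # binarize in case of duplicate edges
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsr()

def encode(V, L, step):
    """Differential (delta) coordinates, uniformly quantized with step size `step`."""
    return np.round((L @ V) / step).astype(np.int32)

def decode(q, L, step, anchors, anchor_pos, w=1.0):
    """Least-squares reconstruction with soft anchor constraints.

    This is the overdetermined system [L; wC] x = [delta; w*anchor_pos]
    that the paper's modification avoids."""
    delta = q * step
    k = len(anchors)
    C = sp.coo_matrix((np.full(k, w), (np.arange(k), anchors)),
                      shape=(k, L.shape[0])).tocsr()
    A = sp.vstack([L, C])
    b = np.vstack([delta, w * anchor_pos])
    return np.column_stack([spla.lsqr(A, b[:, c])[0] for c in range(3)])
```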
2.
Lossy compression of motion capture data can alleviate the problems of efficient storage and transmission by exploiting the redundancy and the superfluous precision of the data. When considering the acceptable amount of distortion, perceptual issues have to be taken into account. Current state‐of‐the‐art methods reduce the data rate required for high quality storage of motion capture data using various techniques. Most of them, however, do not use the common tools of general data compression, such as the method of Lagrange multipliers, and thus they obtain sub‐optimal results, making it difficult to do a fair comparison of their performance. In this paper, we present a general preprocessing step based on Lagrange multipliers, which makes it possible to rigorously adjust the precision in each of the degrees of freedom of the input data according to the amount of influence the given degree of freedom has on the overall distortion. We then present a simple compression method based on Principal Component Analysis, which in combination with the proposed preprocessing achieves significantly better results than current state‐of‐the‐art methods. It allows optimization with respect to various distortion metrics, and we discuss the choice of the metric in two common but distinct scenarios, proposing a perceptually oriented comparison metric based on the relation of the problem at hand to the problem of compression of dynamic meshes.
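A minimal sketch of the two‐stage pipeline as described, under assumptions: the per‐degree‐of‐freedom weights (which the paper derives rigorously via Lagrange multipliers) are taken here as a given input, and the PCA stage is a plain truncated SVD.

```python
import numpy as np

def compress_mocap(X, dof_weights, n_components):
    """X: (frames, dofs) motion-capture matrix; dof_weights: (dofs,) positive weights
    encoding each degree of freedom's influence on the overall distortion."""
    Xw = X * dof_weights             # precision-adjusted coordinates (preprocessing)
    mean = Xw.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xw - mean, full_matrices=False)
    basis = Vt[:n_components]        # leading principal directions
    coeffs = (Xw - mean) @ basis.T   # low-dimensional representation to be stored
    return coeffs, basis, mean

def decompress_mocap(coeffs, basis, mean, dof_weights):
    return (coeffs @ basis + mean) / dof_weights
```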
3.
Development of geometry data compression techniques in the past years has been limited by the lack of a metric with proven correlation with human perception of mesh distortion. Many algorithms have been proposed, but usually the aim has been to minimise mean squared error, or some of its derivatives. In the field of dynamic mesh compression, the situation has changed with the recent proposal of the STED metric, which has been shown to capture the human perception of mesh distortion much better than previous metrics. In this paper, we show how existing algorithms can be steered to provide optimal results with respect to this metric, and we propose a novel dynamic mesh compression algorithm, based on trajectory space PCA and Laplacian coordinates, specifically designed to minimise the newly proposed STED error. Our experiments show that using the proposed algorithm, we were able to reduce the required data rate by up to 50% while preserving the introduced STED error.
4.
In this paper, we investigate compressed sensing principles to devise an in‐situ data reduction framework for visualization of volumetric datasets. We exploit the universality of the compressed sensing framework and show that the proposed method offers a refinable data reduction approach for volumetric datasets. Accurate reconstruction is obtained from partial Fourier measurements of the original data, which are sensed without any prior knowledge of specific feature domains for the data. Our experiments demonstrate the superiority of surfacelets for efficient representation of volumetric data. Moreover, we establish that the accuracy of reconstruction can further improve once a more effective basis for a sparser representation of the data becomes available.
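The following sketch illustrates the general recipe of reconstructing data from partial Fourier measurements by alternating measurement consistency with sparsity promotion. The 3D DCT used as the sparsifying transform is a stand‐in for the surfacelet basis the paper advocates, and the simple thresholding loop is an assumption rather than the authors' solver.

```python
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct(meas, mask, shape, iters=100, thresh=0.01):
    """meas: Fourier coefficients, valid where mask is True; zeros elsewhere."""
    x = np.zeros(shape)
    for _ in range(iters):
        # 1) sparsify: soft-threshold transform coefficients (here: 3D DCT)
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
        x = idctn(c, norm="ortho")
        # 2) enforce consistency with the sensed Fourier samples
        X = np.fft.fftn(x)
        X[mask] = meas[mask]
        x = np.real(np.fft.ifftn(X))
    return x
```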
5.
In this paper we present a new paradigm for the generation and retargeting of facial animation. Like the vast majority of the approaches that have addressed these topics, our formalism is built on blendshapes. However, where prior works have generally encoded facial geometry using a low‐dimensional basis of these blendshapes, we propose to encode facial dynamics by looking at blendshapes as a basis of forces rather than a basis of shapes. We develop this idea into a dynamic model that naturally combines the blendshapes paradigm with physics‐based techniques for the simulation of deforming meshes. Because it escapes the linear span of the shape basis through time‐integration and physics‐inspired simulation, this approach has a wider expressive range than previous blendshape‐based methods. Its inherent physically‐based formulation also enables the simulation of more advanced physical interactions, such as collision responses on lip contacts.
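A minimal sketch of the central idea as we read it: blendshape displacements define target‐restoring forces on the face mesh, and semi‐implicit Euler time integration lets the simulated shape leave the linear span of the basis. The stiffness, damping, and unit‐mass choices are illustrative assumptions.

```python
import numpy as np

def step(x, v, rest, B, w, k=50.0, damping=4.0, dt=1.0 / 240.0):
    """x, v: (n,3) vertex positions/velocities; rest: (n,3) neutral face;
    B: (m,n,3) blendshape displacement basis; w: (m,) activation weights."""
    target = rest + np.tensordot(w, B, axes=1)  # classical blendshape pose
    f = k * (target - x) - damping * v          # blendshapes read as forces
    v = v + dt * f                              # unit mass assumed
    x = x + dt * v                              # trajectory can leave the shape span
    return x, v
```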
6.
Generating a visually appealing human motion sequence using low‐dimensional control signals is a major line of study in the motion research area in computer graphics. We propose a novel approach that reconstructs full body human locomotion using a single inertial sensing device, a smartphone. Smartphones are among the most widely used devices and incorporate inertial sensors such as an accelerometer and a gyroscope. To find a mapping between a full body pose and smartphone sensor data, we perform low‐dimensional embedding of full body motion capture data based on a Gaussian Process Latent Variable Model. Our system ensures temporal coherence between the reconstructed poses by using a state decomposition model for automatic phase segmentation. Finally, the proposed nonlinear regression algorithm finds a proper mapping between the latent space and the sensor data. Our framework effectively reconstructs plausible 3D locomotion sequences. We compare the generated animation to ground truth data obtained using a commercial motion capture system.
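A minimal sketch of one stage of such a pipeline: a nonlinear regression from smartphone inertial features to low‐dimensional pose coordinates. scikit‐learn's Gaussian process regressor stands in for the paper's method; the GPLVM embedding and phase segmentation are not reproduced, and all data below is synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
imu = rng.normal(size=(200, 6))     # accelerometer + gyroscope features per frame
latent = rng.normal(size=(200, 3))  # stand-in for GPLVM latent pose coordinates

# Fit the sensor-to-latent mapping; decoding latent points back to full
# poses would use the learned GPLVM mapping (not shown).
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(imu, latent)
pred = gpr.predict(rng.normal(size=(5, 6)))
```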
7.
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur when the camera moves. This also applies to light‐field cameras: frames rendered from multiple blurred HDR light‐field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single‐sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light‐field video recording. Applying a spatio‐temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light‐field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.
8.
We present a novel approach to recording and computing panorama light fields. In contrast to previous methods that estimate panorama light fields from focal stacks or naive multi‐perspective image stitching, our approach is the first that processes ray entries directly and does not require depth reconstruction or matching of image features. Arbitrarily complex scenes can therefore be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond diffuse reflections of Lambertian surfaces.
9.
Restricted Voronoi diagrams are a fundamental geometric structure used in many applications such as surface reconstruction from point sets or optimal transport. Given a set of sites V = {v_k}, k = 1, …, n, in ℝ^d and a mesh X with vertices in ℝ^d connected by triangles, the restricted Voronoi diagram partitions X by computing, for each site, the portion of X for which that site is the nearest. The restricted Voronoi diagram is the intersection between the regular Voronoi diagram and the mesh. Depending on the site distribution or the ambient space dimension, computing the regular Voronoi diagram may not be feasible using classical algorithms. In this paper, we extend Lévy and Bonneel's approach [LB12] based on nearest neighbor queries. We show that their method is limited when the sites are not located on X. We propose a new algorithm for computing restricted Voronoi diagrams which reduces the number of sites considered for each triangle of the mesh and scales smoothly when the sites are far from the surface.
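A minimal sketch of the nearest‐neighbor ingredient of such methods: a KD‐tree assigns each mesh triangle to its (approximately) nearest site via the triangle barycenter. Exact restricted Voronoi computation additionally clips each triangle against bisector planes; that step, and the paper's reduction of candidate sites per triangle, are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_triangles(sites, verts, tris):
    """sites: (n,d) site positions; verts: (m,d) mesh vertices; tris: (t,3) indices."""
    tree = cKDTree(sites)
    barycenters = verts[tris].mean(axis=1)   # one query point per triangle
    _, nearest = tree.query(barycenters)
    return nearest  # nearest[i] = site owning triangle i (coarse approximation)
```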
10.
Shao‐Chi Chen, Hsin‐Yi Chen, Yi‐Ling Chen, Hsin‐Mu Tsai, Bing‐Yu Chen. Computer Graphics Forum, 2014, 33(7):289–297.
Visual obstruction caused by a preceding vehicle is one of the key factors threatening driving safety. One possible solution is to share the first‐person view of the preceding vehicle to unveil the blocked field‐of‐view of the following vehicle. However, the geometric inconsistency caused by the camera‐eye discrepancy makes view sharing between different cars a very challenging task. In this paper, we present a first‐person‐perspective image rendering algorithm to solve this problem. First, we contour the unobstructed view as the transferred region; then, by iteratively estimating local homography transformations and performing perspective‐adaptive warping with the estimated transformations, we locally adjust the shape of the unobstructed view so that its perspective and boundary match those of the occluded region. The composited view is thus seamless in both perceived perspective and photometric appearance, creating the impression that the preceding vehicle is transparent. Our system improves the driver's visibility and thus relieves the burden on the driver, which in turn increases comfort. We demonstrate the usability and stability of our system by evaluating it on several challenging datasets collected from real‐world driving scenarios.
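A minimal sketch of the warping building block, assuming OpenCV and precomputed feature matches between the two vehicles' cameras: a RANSAC homography maps the preceding vehicle's unobstructed view into the following vehicle's frame. The paper's iterative local refinement and seamless compositing are not reproduced.

```python
import cv2
import numpy as np

def warp_front_view(front_img, pts_front, pts_rear, out_size):
    """pts_front/pts_rear: (k,2) matched points in each camera's image plane;
    out_size: (width, height) of the following vehicle's frame."""
    H, _ = cv2.findHomography(pts_front, pts_rear, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(front_img, H, out_size)
```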
11.
We present a multi‐view stereo reconstruction technique that directly produces a complete high‐fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist‐quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach consists of deforming a common template model to match multi‐view input images of the subject, while satisfying cross‐view, cross‐subject, and cross‐pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA‐based dimension reduction and denoising scheme. We demonstrate high‐fidelity performance capture results with challenging head motion and complex facial expressions around eye and mouth regions. While the quality of our results is on par with the current state‐of‐the‐art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production‐quality mesh topologies.
12.
We present GEMSe, an interactive tool for exploring and analyzing the parameter space of multi‐channel segmentation algorithms. Our target users are domain experts who are not necessarily segmentation specialists. GEMSe allows the exploration of the space of possible parameter combinations for a segmentation framework and its ensemble of results. Users start by sampling the parameter space and computing the corresponding segmentations. A hierarchically clustered image tree provides an overview of variations in the resulting space of label images. Details are provided through exemplary images from the selected cluster and histograms visualizing the parameters and the derived output in the selected cluster. The correlation between parameters and derived output, as well as the effect of parameter changes, can be explored through interactive filtering and scatter plots. We evaluate the usefulness of GEMSe through expert reviews and case studies based on three different kinds of datasets: a synthetic dataset emulating the combination of 3D X‐ray computed tomography with data from K‐Edge spectroscopy, a three‐channel scan of a rock crystal acquired by a Talbot‐Lau grating interferometer X‐ray computed tomography device, and a hyperspectral image.
13.
We present an Aortic Vortex Classification (AVOCLA) that classifies vortices in the human aorta semi‐automatically. Current medical studies assume a strong relation between cardiovascular diseases and blood flow patterns such as vortices. Such vortices are extracted and manually classified according to specific, unstandardized properties. We employ agglomerative hierarchical clustering to group vortex‐representing path lines as the basis for the subsequent classification. Classes are based on the vortex's size, orientation, and shape, its temporal occurrence relative to the cardiac cycle, as well as its spatial position relative to the vessel course. The classification results are presented by a 2D and a 3D visualization technique. To confirm the usefulness of both approaches, we report on the results of a user study. Moreover, AVOCLA was applied to 15 datasets of healthy volunteers and patients with different cardiovascular diseases. The results of the semi‐automatic classification were qualitatively compared to a manually generated ground truth of two domain experts, considering the vortex number and five specific properties.
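A minimal sketch of the grouping stage, with an important caveat: the endpoint‐and‐midpoint descriptor used below is an illustrative stand‐in for the paper's path‐line similarity measure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_pathlines(pathlines, n_clusters):
    """pathlines: list of (k_i, 3) arrays of 3D points along each path line."""
    # Crude fixed-length descriptor: start, middle, and end points of each line.
    feats = np.array([np.concatenate([p[0], p[len(p) // 2], p[-1]])
                      for p in pathlines])
    Z = linkage(feats, method="ward")   # agglomerative hierarchical clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```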
14.
We present the first visualization tool that enables a comparative depiction of structural stress tensor data for vessel walls of cerebral aneurysms. Such aneurysms bear the risk of rupture, while their treatment also carries considerable risks for the patient. Medical researchers emphasize the importance of analyzing the interaction of morphological and hemodynamic information for patient‐specific rupture risk evaluation and treatment analysis. Tensor data such as the stress inside the aneurysm walls characterizes the interplay between morphology and blood flow and appears to be an important criterion for rupture risk. We use different glyph‐based techniques to depict local stress tensors simultaneously and compare their applicability to cerebral aneurysms in a user study. We thus offer medical researchers an effective visual exploration tool to assess aneurysm rupture risk. We developed a GPU‐based implementation of our techniques with a flexible interactive data exploration mechanism. Our depictions were designed in collaboration with domain experts, and we provide details about the evaluation.
15.
We present an example‐based approach for radiometrically linearizing photographs that takes as input a radiometrically linear exemplar image and a regular, uncalibrated target image of the same scene, possibly from a different viewpoint and/or under different lighting. The output of our method is a radiometrically linearized version of the target image. Modeling the change in appearance of a small image patch seen from a different viewpoint and/or under different lighting as a linear 1D subspace allows us to recast radiometric transfer in a form similar to classic radiometric calibration from exposure stacks. The resulting radiometric transfer method is lightweight and easy to implement. We demonstrate the accuracy and validity of our method on a variety of scenes.
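A minimal sketch of the exposure‐stack‐style fitting this formulation enables, under strong simplifying assumptions: each matched patch pair is related by an unknown per‐patch scale (the 1D‐subspace model) through a global inverse response, fitted here as a polynomial by alternating least squares. The paper's actual formulation differs in detail.

```python
import numpy as np

def fit_inverse_response(target_patches, linear_patches, degree=5, iters=20):
    """Each argument: list of flattened, matched patch intensity arrays in [0,1]."""
    coeffs = np.zeros(degree + 1)
    coeffs[1] = 1.0  # initialize the inverse response g as the identity
    for _ in range(iters):
        # Per-patch scale: project g(target patch) onto the linear exemplar patch.
        scales = [np.polyval(coeffs[::-1], t).dot(l) / max(l.dot(l), 1e-12)
                  for t, l in zip(target_patches, linear_patches)]
        # Refit g so that g(target) matches scale * linear, in least squares.
        A = np.vstack([np.vander(t, degree + 1, increasing=True)
                       for t in target_patches])
        b = np.concatenate([s * l for s, l in zip(scales, linear_patches)])
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # g(v) = sum_k coeffs[k] * v**k maps target values to linear
```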
16.
Edit propagation is a technique that propagates various image edits (e.g., colorization and recoloring) performed via user strokes to the entire image, based on the similarity of image features. In most previous work, users must manually determine the importance of each image feature (e.g., color, coordinates, and textures) in accordance with their needs and target images. We focus on representation learning that automatically learns feature representations only from user strokes in a single image, instead of tuning existing features manually. To this end, this paper proposes an edit propagation method using a deep neural network (DNN). Our DNN, which consists of several layers such as convolutional layers and a feature combiner, extracts stroke‐adapted visual features and spatial features, and then adjusts their importance. We also develop a learning algorithm for our DNN that does not suffer from the vanishing gradient problem, and hence avoids falling into undesirable local optima. We demonstrate that edit propagation with deep features, without manual feature tuning, can achieve better results than previous work.
17.
Hsin‐I Chen, Tse‐Ju Lin, Xiao‐Feng Jian, I‐Chao Shen, Bing‐Yu Chen. Computer Graphics Forum, 2015, 34(7):235–244.
A person's handwriting varies within a typical range, and the shapes of handwritten characters also interact in complex ways with their nearby neighbors. This makes automatic synthesis of handwritten characters and paragraphs very challenging. In this paper, we propose a method for synthesizing handwritten text according to a writer's handwriting style. The synthesis algorithm is composed of two phases. First, we create multidimensional morphable models for different characters based on one writer's data. Then, we compute the cursive probability to decide whether each pair of neighboring characters is conjoined or not. By jointly modeling the handwriting style and the conjoined property through a novel trajectory optimization, final handwritten words can be synthesized from a set of collected samples. Furthermore, the paragraphs' layouts are also automatically generated and adjusted according to the writer's style obtained from the same dataset. We demonstrate that our method can successfully synthesize an entire paragraph that mimics a writer's handwriting using his/her collected handwriting samples.
18.
In volume visualization, noise in regions of homogeneous material and at boundaries between different materials poses a great challenge in extracting, analyzing and rendering features of interest. In this paper, we present a novel volume denoising/smoothing method based on the L0 gradient minimization framework. This framework globally controls how many voxels with a non‐zero gradient appear in the result, in order to approximate the structures of important features in a sparse way. The resulting problem can be solved quickly by an alternating optimization strategy with half‐quadratic splitting. While the proposed L0 volume gradient minimization method can effectively remove noise in homogeneous materials, a blurring‐sharpening strategy is proposed to diminish noise or smooth local details on the boundaries. This generates salient features with smooth boundaries and visually pleasing structures. We compare our method with the bilateral filter and anisotropic diffusion, and demonstrate the effectiveness and efficiency of our method on several volumes in different modalities.
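For orientation, the following is a minimal sketch of L0 gradient minimization with half‐quadratic splitting, adapted from Xu et al.'s image formulation to a 3D volume with periodic boundaries. Parameter defaults are illustrative, and the paper's blurring‐sharpening boundary strategy is not included.

```python
import numpy as np

def l0_smooth_volume(I, lam=0.02, beta0=0.04, beta_max=1e5, kappa=2.0):
    """Minimize ||S - I||^2 + lam * #{voxels with non-zero gradient of S}."""
    F = np.fft.fftn
    shape = I.shape
    # Fourier transforms of forward-difference operators along each axis
    otf = []
    for ax in range(3):
        d = np.zeros(shape)
        d[0, 0, 0] = -1.0
        idx = [0, 0, 0]
        idx[ax] = -1          # wrap-around neighbor => forward difference
        d[tuple(idx)] = 1.0
        otf.append(F(d))
    denom_base = sum(np.abs(o) ** 2 for o in otf)
    FI, S, beta = F(I), I.copy(), beta0
    while beta < beta_max:
        # h-subproblem: keep gradients only where they beat the L0 penalty
        grads = [np.roll(S, -1, axis=ax) - S for ax in range(3)]
        keep = sum(g ** 2 for g in grads) > lam / beta
        h = [g * keep for g in grads]
        # S-subproblem: screened Poisson solve in the Fourier domain
        rhs = FI + beta * sum(np.conj(o) * F(hi) for o, hi in zip(otf, h))
        S = np.real(np.fft.ifftn(rhs / (1.0 + beta * denom_base)))
        beta *= kappa         # gradually tighten the splitting
    return S
```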
19.
Many image editing applications rely on the analysis of image patches. In this paper, we present a method to analyze patches by embedding them in a vector space in which the Euclidean distance reflects patch similarity. Inspired by Word2Vec, we term our approach Patch2Vec. However, there is a significant difference between words and patches. Words have a fairly small and well‐defined dictionary. Image patches, on the other hand, have no such dictionary, and the number of different patch types is not well defined. The problem is aggravated by the fact that each patch might contain several objects and textures. Moreover, Patch2Vec should be universal, because it must be able to map never‐before‐seen textures to the vector space. The mapping is learned by analyzing the distribution of all natural patches. We use a Convolutional Neural Network (CNN) to learn Patch2Vec. In particular, we train the CNN on labeled images with a triplet‐loss objective function. The trained network encodes a given patch to a 128D vector. Patch2Vec is evaluated visually, qualitatively, and quantitatively. We then use several variants of an interactive single‐click image segmentation algorithm to demonstrate the power of our method.
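A minimal sketch of the training objective as described: an encoder maps patches to 128‐D vectors and is trained with a triplet loss so that similarly labeled patches embed closer together than differently labeled ones. The tiny architecture below is an illustrative stand‐in for the paper's CNN.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 128),  # 128-D patch embedding
)
triplet = nn.TripletMarginLoss(margin=0.2)

# anchor/positive share a label; negative has a different label (random stand-ins here)
anchor, positive, negative = (torch.randn(8, 3, 16, 16) for _ in range(3))
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```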
20.
Seungbae Bang, Byungkuk Choi, Roger Blanco i Ribera, Meekyoung Kim, Sung‐Hee Lee, Junyong Noh. Computer Graphics Forum, 2015, 34(7):123–132.
Rigging is a core element in the process of bringing a 3D character to life. The rig defines and delimits the motions of the character and provides an interface with which an animator interacts with the 3D character. The quality of the rig has a key impact on the expressiveness of the character. Creating a usable, rich, production‐ready rig is a laborious task requiring direct intervention by a trained professional, because the goal is difficult to achieve with fully automatic methods. We propose a semi‐automatic rig editing framework which reduces the need for manual intervention while maintaining an important degree of control over the final rig. Starting from an automatically generated base rig, we provide interactive operations which efficiently configure the skeleton structure and mesh skinning.