Similar Documents
20 similar documents found.
1.
This paper presents a method that generates natural and intuitive deformations via direct manipulation and smooth interpolation for multi-element 2D shapes. Observing that the structural relationships between different parts of a multi-element 2D shape are important for capturing its feature semantics, we introduce a simple structure called a feature frame to represent such relationships. For shape manipulation, a constrained optimization is solved to find optimal deformed shapes under user-specified handle constraints. Based on the feature frame, local feature preservation and structural relationship maintenance are encoded directly into the objective function. Beyond deforming a given multi-element 2D shape into a new one at each key frame, our method can automatically generate a sequence of natural intermediate deformations by interpolating the shapes between the key frames. The method is computationally efficient, allowing real-time manipulation and interpolation while producing natural and visually plausible results.
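To make the "constrained optimization under handle constraints" concrete, the following is a minimal sketch of the general pattern: a quadratic detail-preservation term plus soft handle constraints, solved as one linear least-squares problem. It uses a uniform Laplacian energy as a stand-in for the paper's feature-frame terms; the function name, weights, and toy geometry are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def deform_2d(rest, edges, handles, w_handle=100.0):
    """Handle-constrained least-squares deformation of a 2D point set.

    rest    : (n, 2) rest-pose vertex positions
    edges   : list of (i, j) index pairs defining connectivity
    handles : dict {vertex index: target (x, y) position}

    Generic Laplacian-style sketch, NOT the feature-frame energy of the
    paper; it only illustrates 'detail term + handle constraints in one
    linear least-squares solve'.
    """
    n = rest.shape[0]
    # Uniform graph Laplacian.
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    delta = L @ rest                      # rest-pose differential coordinates

    # Soft handle constraints appended as extra rows.
    rows, rhs = [L], [delta]
    for idx, target in handles.items():
        row = np.zeros((1, n)); row[0, idx] = w_handle
        rows.append(row)
        rhs.append(w_handle * np.asarray(target)[None, :])
    A = np.vstack(rows)
    b = np.vstack(rhs)
    deformed, *_ = np.linalg.lstsq(A, b, rcond=None)
    return deformed

# Toy example: a square with one corner dragged.
V = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(deform_2d(V, E, {0: (0., 0.), 2: (1.5, 1.5)}))
```

In the paper's setting, the detail term would instead encode local feature preservation and the structural relationships derived from the feature frame.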

2.
We present a novel L4RW (Laziness-based Realistic Real-time Responsive Rebalance in Walking) technique to synthesize 4RW animations under unexpected external perturbations with minimal locomotion effort. We first devise a lazy dynamic rebalance model, which specifies the dynamic balance conditions, defines the rebalance effort, and automatically selects a suitable rebalance strategy using the laziness law after an unexpected perturbation. Based on this model, L4RW searches a motion capture (mocap) database for an appropriate motion segment to follow, and the transition motion is generated by interpolating the active response dynamic motion. A support vector machine (SVM)-based training, classification, and prediction algorithm reduces the search space; it is trained offline only once. Our algorithm classifies the mocap database into strategy-specific subsets and then, online, predicts responsive motions within the subset that matches the selected strategy. The rebalance effort, the 'extrapolated center of mass' (XCoM), and environment constraints are chosen as feature attributes for the SVM feature vector. Furthermore, the segments in each subset are sorted by rebalance effort, and our algorithm searches for an acceptable segment starting from the least-effort one. Compared with previous methods, our search is over two orders of magnitude faster, and our algorithm creates more realistic and smoother 4RW animation.
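The SVM step can be illustrated with a small sketch: a classifier trained offline on strategy-labelled examples and queried online to pick the strategy-specific subset of the database to search. The feature layout (rebalance effort, XCoM offset, environment flag) follows the abstract, but the numeric values, labels, and strategy names below are invented placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical feature layout suggested by the abstract:
# [rebalance effort, XCoM offset from the support polygon, environment flag].
# Labels are the rebalance strategies; all values here are made up.
X_train = np.array([
    [0.12, 0.03, 0],   # small perturbation, free space   -> ankle strategy
    [0.45, 0.15, 0],   # larger offset                    -> hip strategy
    [0.90, 0.40, 1],   # large offset, obstacle ahead     -> protective step
])
y_train = np.array(["ankle", "hip", "step"])

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)          # trained offline, once

# At runtime, classify the current state to select the strategy-specific
# subset of the mocap database, then search it starting from the
# least-effort segment.
state = np.array([[0.50, 0.20, 0]])
print(clf.predict(state))
```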

3.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on essentially a per-sample basis, as in kd-tree traversal; this leads to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that obtains all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterisation pass. Our approach can peel up to thousands of layers while maintaining 32-bit floating-point accuracy, which was not possible previously. By raytracing only the valid ray segments between each consecutive pair of depth layers, we can skip both interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.
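The per-ray skipping idea can be sketched as follows: given the sorted depth layers that the peeling pass produces for one pixel, the ray is marched only inside consecutive [entry, exit] pairs. This is a CPU-side Python illustration, not the GPU implementation; the function names and step size are assumptions.

```python
import numpy as np

def integrate_valid_segments(depth_layers, sample_fn, step=0.01):
    """Ray-march only between consecutive pairs of peeled depth layers.

    depth_layers : sorted 1-D array of entry/exit depths along one ray,
                   as produced by depth-peeling the bounding geometry
                   (even index = entry, odd index = exit).
    sample_fn    : callable returning the volume sample at depth t.

    Space outside the [entry, exit] intervals is skipped entirely.
    """
    accum = 0.0
    for k in range(0, len(depth_layers) - 1, 2):
        t0, t1 = depth_layers[k], depth_layers[k + 1]
        for t in np.arange(t0, t1, step):     # march only the valid segment
            accum += sample_fn(t) * step
    return accum

# Toy ray with two disjoint occupied intervals.
layers = np.array([0.2, 0.4, 0.7, 0.9])
print(integrate_valid_segments(layers, lambda t: 1.0))  # roughly 0.4
```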

4.
Scatterplots directly depict two dimensions of multi-dimensional data points, discarding all other information. To visualize all data, these plots are extended to scatterplot matrices, which distribute the information of each data point over many plots. The resulting visual complexity is nowadays alleviated by concepts such as filtering and focus-and-context. We present a method based on depth of field that combines both aspects and injects information from all dimensions into each scatterplot. Our approach is a natural generalization of the familiar focus effects from optics. It is based on a multidimensional focus selection body: points outside this body are defocused in proportion to their distance from it. Our method allows for a continuous transition from data points in focus, over regions of blurry points providing contextual information, to visually filtered data. Our algorithm supports different focus selection bodies, blur kernels, and point shapes. We present an optimized GPU-based implementation for interactive exploration and show the usefulness of our approach on several data sets.
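A minimal sketch of the defocusing rule: each high-dimensional point's distance to the focus selection body (assumed here to be a hypersphere for simplicity; the paper supports other bodies) is mapped to a screen-space blur radius, so out-of-focus points supply blurred context in every plot. Names and parameter values are illustrative.

```python
import numpy as np

def blur_radii(points, focus_center, focus_radius, max_blur=8.0):
    """Map each multi-dimensional data point to a screen-space blur radius.

    Points inside the focus selection body (a hypersphere in this sketch)
    stay sharp; points outside are defocused in proportion to their
    distance from the body's surface, clamped to a maximum blur.
    """
    d = np.linalg.norm(points - focus_center, axis=1)
    outside = np.maximum(d - focus_radius, 0.0)
    return np.minimum(outside, 1.0) * max_blur

# 4-D data rendered in one scatterplot of dims (0, 1); the blur radius
# carries the remaining dimensions' distance to the focus body into the plot.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 4))
r = blur_radii(data, focus_center=np.zeros(4), focus_radius=0.5)
print(r.min(), r.max())
```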

5.
Many works focus on multi-spectral capture and analysis, but multi-spectral display remains a challenge. Most prior work on multi-primary displays uses ad-hoc narrow-band primaries that ensure a larger color gamut but cannot ensure good spectral reproduction. Content-dependent spectral analysis is the only way to produce good spectral reproduction, but it cannot be applied to general data sets. Wide-band primaries are better suited to good spectral reproduction because they cover more of the spectral range, but they have not been explored much. In this paper we explore, for the first time, the use of wide-band primaries for accurate spectral reproduction, and present the first content-independent multi-spectral display, achieved using superimposed projections with modified wide-band primaries. We present a content-independent primary selection method that selects a small set of n primaries from a large set of m candidate primaries, where m > n. Our selection method chooses primaries with complete coverage of the visible wavelength range (for good spectral reproduction accuracy), low interdependency (to keep the number of primaries small), and high light throughput (for high light efficiency). Once the primaries are selected, the input values of the different primary channels needed to generate a desired spectrum are computed using an optimization method that minimizes spectral mismatch while maximizing visual quality. We implement a real multi-spectral display prototype with 9 primaries using three modified conventional 3-primary projectors, and compare it with a conventional display to demonstrate its superior performance. Experiments show that our display provides a large gamut and good visual appearance while reproducing arbitrary multi-spectral images with high spectral accuracy.
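Once primaries are chosen, computing the channel inputs for a desired spectrum is essentially a constrained fit. The sketch below uses non-negative least squares as a simplified stand-in for the paper's optimization (which also weighs visual quality); the Gaussian primary spectra and wavelength sampling are made-up placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# P: spectra of the n selected primaries sampled at m wavelengths
# (one column per primary).  target: desired spectrum at the same samples.
# Channel inputs must be non-negative, so non-negative least squares is a
# natural simplified stand-in for the paper's optimization.
wavelengths = np.linspace(400, 700, 31)
P = np.stack([np.exp(-((wavelengths - c) / 40.0) ** 2)      # broad, wide-band
              for c in (430, 490, 550, 610, 670)], axis=1)  # primaries (toy)
target = np.exp(-((wavelengths - 560) / 60.0) ** 2)

inputs, residual = nnls(P, target)
print("channel inputs:", np.round(inputs, 3), "spectral mismatch:", residual)
```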

6.
The standard C/C++ implementation of a spatial partitioning data structure, such as an octree or quadtree, is often inefficient in terms of storage, particularly when the memory overhead for maintaining parent-to-child pointers is significant relative to the amount of actual data in each tree node. In this work, we present a novel data structure that implements uniform spatial partitioning without storing explicit parent-to-child pointer links. Our linkless tree encodes the storage locations of subdivided nodes using perfect hashing while retaining important properties of uniform spatial partitioning trees, such as a coarse-to-fine hierarchical representation, efficient storage usage, and efficient random access. We demonstrate the performance of our linkless trees using image compression and path planning examples.
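The core idea, storing nodes under a hashed location key instead of parent-to-child pointers, can be sketched with an ordinary dictionary standing in for the perfect hash. The key encoding and class below are illustrative, not the paper's exact scheme.

```python
def node_key(level, x, y):
    """Encode a quadtree node's location (level + cell coordinates) into a
    single integer key, unique across levels.  A Python dict stands in for
    the perfect hash; no parent-to-child pointers are stored anywhere."""
    return (1 << (2 * level)) + (y << level) + x

class LinklessQuadtree:
    def __init__(self):
        self.cells = {}                            # key -> payload

    def insert(self, level, x, y, value):
        self.cells[node_key(level, x, y)] = value

    def lookup(self, level, x, y):
        """Random access: walk coarse-to-fine, returning the finest node
        stored along the path to cell (x, y) at the requested level."""
        found = None
        for l in range(level + 1):
            key = node_key(l, x >> (level - l), y >> (level - l))
            if key in self.cells:
                found = self.cells[key]
        return found

tree = LinklessQuadtree()
tree.insert(0, 0, 0, "root")
tree.insert(2, 3, 1, "leaf")
print(tree.lookup(2, 3, 1), tree.lookup(2, 0, 0))   # "leaf", then "root"
```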

7.
We present an automatic image-recoloring technique for enhancing color contrast for dichromats whose computational cost varies linearly with the number of input pixels. Our approach can be efficiently implemented on GPUs, and we show that for typical image sizes it is up to two orders of magnitude faster than the current state-of-the-art technique. Unlike previous approaches, ours preserves temporal coherence and is therefore suitable for video recoloring. We demonstrate the effectiveness of our technique by integrating it into a visualization system and showing, for the first time, real-time high-quality recolored visualizations for dichromats.

8.
We introduce a novel method for non-rigid shape matching, designed to address the symmetric ambiguity problem present when matching shapes with intrinsic symmetries. Unlike the majority of existing methods, which try to overcome this ambiguity by sampling a set of landmark correspondences, we address the problem directly by performing shape matching in an appropriate quotient space, where the symmetry has been identified and factored out. This allows us both to simplify the shape matching problem by matching between subspaces and to return multiple solutions with equally good dense correspondences. Remarkably, both symmetry detection and shape matching are done without establishing any landmark correspondences between either points or parts of the shapes. This allows us to avoid the expensive combinatorial search present in most intrinsic symmetry detection and shape matching methods. We compare our technique with state-of-the-art methods and show that superior performance can be achieved both when the symmetry on each shape is known and when it needs to be estimated.

9.
Our hybrid display model combines multiple automultiscopic elements volumetrically to support horizontal and vertical parallax with a larger depth of field and better accommodation cues than single-layer elements. In this paper, we introduce a framework to analyze the bandwidth of such display devices. Based on this analysis, we show that multiple layers can achieve a wider depth of field using less bandwidth than single-layer displays. We present a simple algorithm to distribute an input light field to multiple layers, and devise an efficient ray tracing algorithm for synthetic scenes. We demonstrate the effectiveness of our approach through software simulation and two corresponding hardware prototypes.

10.
Statistical shape modeling is a widely used technique for representing and analyzing the shapes and shape variations present in a population. A statistical shape model models the distribution in a high-dimensional shape space, where each shape is represented by a single point. We present a design study on the intuitive exploration and visualization of shape spaces and shape models. Our approach focuses on the dual-space nature of these spaces: the high-dimensional shape space represents the population, whereas object space represents the shape of the 3D object associated with a point in shape space. A 3D object view provides local details for a single shape. The high-dimensional points in shape space are visualized using a 2D scatter plot projection, whose axes can be manipulated interactively. This results in a dynamic scatter plot, with the further extension that each point is visualized as a small version of the object shape that it represents. We further enhance the population-object duality with a new type of view aimed at shape comparison. This new "shape evolution view" visualizes shape variability along a single trajectory in shape space, and serves as a link between the two spaces described above. Our three-view exploration concept strongly emphasizes linked interaction between all spaces. Moving the cursor over the scatter plot or evolution views, shapes are dynamically interpolated and shown in the object view; conversely, camera manipulation in the object view affects the object visualizations in the other views. We present a GPU-accelerated implementation, and show the effectiveness of the three-view approach using a number of real-world cases. In these, we demonstrate how this multi-view approach can be used to visually explore important aspects of a statistical shape model, including specificity, compactness, and reconstruction error.
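A rough sketch of the scatter-plot projection and the cursor-driven interpolation between shapes: shapes are flattened vertex vectors, projected to 2D with PCA, and blended along a trajectory in shape space. The random data and plain linear interpolation are simplifying assumptions made only for illustration.

```python
import numpy as np

# Each shape is a flattened vector of 3D vertex positions; a population of
# such vectors spans the high-dimensional shape space. Random placeholder data.
rng = np.random.default_rng(1)
shapes = rng.normal(size=(50, 300))              # 50 shapes, 100 vertices each

mean = shapes.mean(axis=0)
# PCA via SVD: principal axes of the shape distribution.
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
coords2d = (shapes - mean) @ Vt[:2].T            # 2D scatter-plot projection

def interpolate(a, b, t):
    """Shape along a trajectory in shape space, as shown dynamically in the
    object view when moving the cursor between two points in the plot."""
    return (1 - t) * shapes[a] + t * shapes[b]

print(coords2d.shape, interpolate(0, 1, 0.5).shape)
```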

11.
Electrostatic halftoning, a sampling algorithm based on electrostatic principles, is among the leading methods for stippling, dithering, and sampling. However, this approach is only applicable to a single class of dots with uniform size and colour. In our work, we complement these ideas with advanced features for real-world applications. We propose a versatile framework for colour halftoning, hatching, and multi-class importance sampling with individual weights. Our novel approach is the first method that globally optimizes the distribution of different objects of varying sizes relative to multiple given density functions. The quality, versatility, and adaptability of our approach are demonstrated in various experiments.
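The underlying force model can be sketched in brute-force form: pairwise repulsion between dots plus attraction toward dark regions of the target density, iterated until the distribution relaxes. This O(n^2), single-class version only illustrates the principle; the actual methods use fast summation, and this paper extends the model to multiple classes with individual weights. All constants and names below are placeholders.

```python
import numpy as np

def halftone_step(points, density_grid, tau=0.002):
    """One relaxation step of an electrostatic-style stippling sketch:
    pairwise repulsion between dots plus attraction toward dark cells of a
    density grid.  Brute-force, single dot class, for illustration only."""
    n = len(points)
    force = np.zeros_like(points)
    # Repulsion between all dot pairs (1/r falloff).
    diff = points[:, None, :] - points[None, :, :]
    dist2 = (diff ** 2).sum(-1) + np.eye(n)          # avoid self-division
    force += (diff / dist2[..., None]).sum(axis=1)
    # Attraction: pull dots toward high-density (dark) grid cells.
    h, w = density_grid.shape
    for cy in range(h):
        for cx in range(w):
            p = np.array([(cx + 0.5) / w, (cy + 0.5) / h])
            d = p[None, :] - points
            r2 = (d ** 2).sum(-1) + 1e-6
            force += density_grid[cy, cx] * d / r2[:, None] / (h * w)
    return np.clip(points + tau * force, 0.0, 1.0)

rng = np.random.default_rng(2)
pts = rng.random((64, 2))
dens = np.zeros((8, 8)); dens[2:6, 2:6] = 1.0        # dark square in the middle
for _ in range(20):
    pts = halftone_step(pts, dens)
print(pts.mean(axis=0))                               # state after relaxation
```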

12.
Higher-order finite element methods have emerged as an important discretization scheme for simulation. They are increasingly used in contemporary numerical solvers, generating a new class of data that must be analyzed by scientists and engineers. Currently available visualization tools for this type of data are either batch oriented or limited to certain cell types and polynomial degrees. Other approaches approximate higher-order data by resampling, trading off interactivity and quality. To overcome these limitations, we have developed a distributed visualization system that allows interactive exploration of non-conforming unstructured grids, resulting from space-time discontinuous Galerkin simulations, in which each cell has its own higher-order polynomial solution. Our system employs GPU-based raycasting for direct volume rendering of complex grids featuring non-convex, curvilinear cells with varying polynomial degree. Frequency-based adaptive sampling accounts for the high variation along rays. For distribution across a GPU cluster, the initial object-space partitioning is determined by cell characteristics such as the polynomial degree and is adapted at runtime by a load balancing mechanism. The performance and utility of our system are evaluated for different aeroacoustic simulations involving the propagation of shock fronts.

13.
Color transfer is an image processing technique that produces a new image combining one source image's content with another image's color style. While able to produce convincing results, Reinhard et al.'s pioneering work has two problems: colors from different regions may be mixed up, and fidelity may be lost. Many local color transfer algorithms have been proposed to resolve the first problem, but the second has received little attention. In this paper, a novel color transfer algorithm is presented to resolve the fidelity problem of color transfer in terms of scene details and colors. It is well known that the human visual system is more sensitive to local intensity differences than to intensity itself; we therefore consider preserving the color gradient necessary for scene fidelity. We formulate color transfer as an optimization problem and solve it in two steps: histogram matching and a gradient-preserving optimization. Following this notion of fidelity in terms of color and gradient, we also propose a metric for objectively evaluating the performance of example-based color transfer algorithms. The experimental results show the validity and high fidelity of our algorithm, and that it can also be used for local color transfer.
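Step one of the pipeline, histogram matching, can be sketched per channel by mapping each source value to the reference value of equal rank; the gradient-preserving optimization that restores the source's local detail is omitted here for brevity. The helper name and test data are illustrative.

```python
import numpy as np

def match_histogram(source, reference):
    """Per-channel histogram matching (CDF matching): map each source value
    to the reference value of equal rank.  This is only the first step of
    the two-step pipeline described above."""
    matched = np.empty_like(source, dtype=np.float64)
    for c in range(source.shape[2]):
        src = np.sort(source[..., c].ravel())
        # Resample the reference channel to the same number of quantiles.
        ref = np.quantile(reference[..., c],
                          np.linspace(0.0, 1.0, src.size))
        matched[..., c] = np.interp(source[..., c], src, ref)
    return matched

rng = np.random.default_rng(3)
content = rng.random((32, 32, 3))            # placeholder source image
style = rng.random((48, 48, 3)) * 0.5 + 0.5  # placeholder reference image
out = match_histogram(content, style)
print(out.min(), out.max())
```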

14.
We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser but significantly faster solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically plausible images at high frame rates on standard off-the-shelf graphics hardware.
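The first-order idea can be sketched with standard paraxial ray-transfer (ABCD) matrices: each optical element is a 2x2 matrix acting on a (height, angle) ray, and the whole path to the sensor collapses into a single matrix. This omits the flare-specific reflection sequences the paper derives; the optical parameters below are made up.

```python
import numpy as np

def propagate(d):                 # free travel over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):                 # refraction at a thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Toy system (made-up numbers): lens, gap, lens, gap to the sensor.
# Matrices compose right-to-left, so the rightmost factor is hit first.
system = propagate(30.0) @ thin_lens(40.0) @ propagate(5.0) @ thin_lens(50.0)

ray_in = np.array([2.0, 0.01])    # entrance height (mm), angle (rad)
height_on_sensor, angle_out = system @ ray_in
print(height_on_sensor, angle_out)
```

Because the whole chain reduces to one matrix, a flare-producing ray can be mapped to the sensor with a single matrix-vector multiply per ray, which is what makes the approach cheap at runtime.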

15.
Developing motions for simulated humanoids remains a challenging problem. While a multitude of approaches exist, few of them are reimplemented or reused by others. The predominant focus of papers in the area remains algorithmic novelty, due to the difficulty of, and lack of incentive for, more fully exploring what can be accomplished within the scope of existing methodologies. We develop a language, based on common features found across physics-based character animation research, that facilitates the controller authoring process. By specifying motion primitives over a number of phases, our language has been used to design over 25 controllers for motions ranging from simple static balanced poses to highly dynamic stunts. Controller sequencing is supported in two ways. Naive integration of controllers is achieved by using highly stable pose controllers (such as standing or squatting) as intermediate transitions. More complex controller connections are learned automatically through an optimization process. The robustness of our system is demonstrated via random walkthroughs of our integrated set of controllers.

16.
We propose a novel multi-resolution method to efficiently perform large-scale cloth simulation. Our cloth simulation method is based on a triangle-based energy model constructed from a cloth mesh. We observe that the solution of the linear system arising in cloth simulation is smooth in certain regions of the cloth mesh, and we solve the linear system over those regions in a reduced solution space. We then reconstruct the original solution by simple interpolation from the solution computed in the reduced space. To identify regions where the solution is smooth, we propose simplification metrics that consider stretching, shear, and bending forces, as well as geometric collisions. Our multi-resolution method can be applied to many existing cloth simulation methods, since it operates on a general linear system. To demonstrate the benefits of our method, we apply it to four large-scale cloth benchmarks consisting of tens or hundreds of thousands of triangles. Thanks to the reduced computation, we achieve a performance improvement of up to an order of magnitude with little loss of simulation quality.
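The "solve in a reduced space, then interpolate back" step can be sketched as a Galerkin projection of a linear system onto an interpolation basis. This is a generic sketch, not the paper's region-adaptive scheme; the matrices, interpolation operator, and names are illustrative.

```python
import numpy as np

def solve_reduced(A, b, P):
    """Solve A x = b approximately in a reduced space: x ~ P x_c, where the
    columns of P interpolate coarse unknowns to the full mesh (Galerkin
    projection).  Illustrates 'solve reduced, then interpolate back'."""
    A_c = P.T @ A @ P                 # small, coarse system
    b_c = P.T @ b
    x_c = np.linalg.solve(A_c, b_c)
    return P @ x_c                    # interpolate back to full resolution

# Toy SPD system on 8 unknowns, reduced to 4 by linear interpolation.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # 1-D Laplacian-like
b = np.ones(n)
P = np.zeros((n, n // 2))
for i in range(n):
    j = i / 2.0
    lo = min(int(np.floor(j)), n // 2 - 1)
    hi = min(lo + 1, n // 2 - 1)
    w = j - lo
    P[i, lo] += 1 - w
    P[i, hi] += w
x_full = np.linalg.solve(A, b)
x_red = solve_reduced(A, b, P)
print(np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full))  # relative error
```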

17.
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method achieves high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the specified actor's appearance, and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. To leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that regressing on multiple video streams achieves higher quality than previous approaches designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the actor's field of view, which is very beneficial: the cameras are then less of a distraction for the actor and allow an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular with the ever-growing demand for motion-captured facial animation in visual effects and video games.
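A single random fern, the building block such regression uses, is simple to sketch: a handful of binary threshold tests on the feature vector index a lookup table whose bins store mean training targets. The paper's multi-view, multi-dimensional ensemble would combine many such ferns with view-specific features; everything below (class name, dimensions, synthetic targets) is an illustrative assumption.

```python
import numpy as np

class RandomFern:
    """One random fern: n_tests binary threshold tests index one of
    2**n_tests bins; each bin stores the mean of the training targets
    that fall into it."""
    def __init__(self, n_features, n_tests=6, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.idx = rng.integers(0, n_features, size=n_tests)
        self.thr = rng.normal(size=n_tests)

    def _bin(self, X):
        bits = (X[:, self.idx] > self.thr).astype(int)
        return bits @ (1 << np.arange(len(self.idx)))

    def fit(self, X, Y):
        bins = self._bin(X)
        self.table = np.zeros((1 << len(self.idx), Y.shape[1]))
        for b in np.unique(bins):
            self.table[b] = Y[bins == b].mean(axis=0)
        return self

    def predict(self, X):
        return self.table[self._bin(X)]

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 32))      # placeholder for multi-view image features
Y = X[:, :3] * 0.5                  # placeholder facial parameters to regress
fern = RandomFern(n_features=32, rng=rng).fit(X, Y)
print(fern.predict(X[:2]))
```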

18.
Controlling a crowd using multi-touch devices appeals to the computer games and animation industries, as such devices provide a high-dimensional control signal that can effectively define crowd formation and movement. However, existing works relying on pre-defined control schemes require users to learn a scheme that may not be intuitive. We propose a data-driven, gesture-based crowd control system in which the control scheme is learned from example gestures provided by different users. In particular, we build a database of paired samples of gestures and crowd motions. To generalize across the gesture styles of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of varying numbers of characters over time, we propose a set of crowd motion features extracted from a Gaussian mixture model. Given a run-time gesture, our system retrieves the K nearest gestures from the database and interpolates the corresponding crowd motions to generate the run-time control. Our system is accurate and efficient, making it suitable for real-time applications such as real-time strategy games and interactive animation control.

19.
We propose a novel rendering method that supports interactive BRDF editing as well as relighting of a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At render time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve accuracy, a set of sub-area samples associated with each gather sample refines the glossy reflection of geometric details without increasing the precomputation time. We demonstrate this approach with a number of examples to verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
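The linear structure described above reduces, per shading point, to dot products: one precomputed transfer vector per basis BRDF is multiplied by the gathered incoming radiance, and the results are blended by user-controlled weights. The array shapes and values below are placeholders chosen only to illustrate that editing the weights or the lighting is just a re-evaluation of these products.

```python
import numpy as np

# Per shading point: one precomputed radiance-transfer vector per basis BRDF.
# Edited/relit radiance = sum over basis BRDFs of
#   (user weight) * (transfer vector . incoming radiance from gather samples).
rng = np.random.default_rng(5)
n_basis, n_gather = 8, 128
transfer = rng.random((n_basis, n_gather))      # precomputed offline, per point
incoming = rng.random(n_gather)                 # gathered radiance this frame
weights = np.array([0.5, 0.2, 0.0, 0.0, 0.1, 0.0, 0.2, 0.0])  # user sliders

radiance = weights @ (transfer @ incoming)      # linear in weights and lighting
print(radiance)
```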

20.
A major challenge in generating high-fidelity virtual environments (VEs) is providing realism at interactive rates. High-fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance through a series of novel exploitations: parts of the scene that are not currently being attended to by the viewer are rendered at a much lower quality without the difference being perceived. This paper investigates the effect that spatialized directional sound has on a user's visual attention towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via multi-modal maps. The multi-modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi-modal VEs.
