Similar Documents
20 similar documents found (search time: 46 ms).
1.
Explicit parameterization of subdivision surfaces for texture mapping adds significant cost and complexity to film production. Most parameterization methods currently in use require setup effort, and none are completely general. We propose a new texture mapping method for Catmull-Clark subdivision surfaces that requires no explicit parameterization. Our method, Ptex, stores a separate texture per quad face of the subdivision control mesh, along with a novel per-face adjacency map, in a single texture file per surface. Ptex uses the adjacency data to perform seamless anisotropic filtering of multi-resolution textures across surfaces of arbitrary topology. Just as importantly, Ptex requires no manual setup and scales to models of arbitrary mesh complexity and texture detail. Ptex has been successfully used to texture all of the models in an animated theatrical short and is currently being applied to an entire animated feature. Ptex has eliminated UV assignment from our studio and significantly increased the efficiency of our pipeline.
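To make the storage model concrete, here is a minimal sketch of per-face textures with an adjacency map, in the spirit of the abstract above. The class names, the edge-numbering convention, and the nearest-neighbor lookup are illustrative assumptions, not the actual Ptex file format or API.

```python
# Illustrative sketch only: not the real Ptex format or API.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceTexture:
    texels: np.ndarray   # (res_v, res_u, 3) float colors for one quad face
    adjacent: list       # adjacent[e] = (neighbor_face_id, neighbor_edge) or None,
                         # one entry per edge, enabling filtering across faces

class PerFaceTextureSet:
    """One texture per quad face of the control mesh, plus adjacency."""
    def __init__(self, faces):
        self.faces = faces

    def sample(self, face_id, u, v):
        """Nearest-neighbor lookup; a tap that falls off the u-min edge is
        routed to the adjacent face (other edges handled analogously)."""
        face = self.faces[face_id]
        if u < 0.0:
            nbr = face.adjacent[3]   # assumed convention: edge 3 = u-min side
            if nbr is not None:
                return self.sample(nbr[0], u + 1.0, v)
            u = 0.0                  # clamp at a true surface boundary
        res_v, res_u, _ = face.texels.shape
        i = min(max(int(v * res_v), 0), res_v - 1)
        j = min(max(int(u * res_u), 0), res_u - 1)
        return face.texels[i, j]
```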

2.
We present a method to automatically convert videos and CG animations to stylized animated line drawings. Using a data-driven approach, the animated drawings can follow the sketching style of a specific artist. Given an input video, we first extract edges from the video frames and vectorize them to curves. The curves are matched to strokes from an artist's library, while following the artist's stroke distribution and characteristics. The key challenge in this process is to match the large number of curves in the frames over time, despite topological and geometric changes, allowing us to maintain temporal coherence in the output animation. We solve this problem using constrained optimization to build correspondences between tracked points and create smooth sheets over time. These sheets are then replaced with strokes from the artist's database to render the final animation. We evaluate our tracking algorithm on various examples and show stylized animation results based on various artists.

3.
We present a complete approach to efficiently deriving a varying level-of-detail segmentation of arbitrary animated objects. An over-segmentation is built by combining sets of initial segments computed for each input pose, followed by a fast progressive simplification which aims at preserving rigid segments. The final segmentation result can be efficiently adjusted for cases where pose editing is performed or new poses are added at arbitrary positions in the mesh animation sequence. A smooth view of pose-to-pose segmentation transitions is offered by merging the partitioning of the current pose with that of the next pose. A perceptually friendly visualization scheme is also introduced for propagating segment colors between consecutive poses. We report on the efficiency and quality of our framework as compared to previous methods under a variety of skeletal and highly deformable mesh animations.

4.
Quick creation of 3D character animations is an important task in game design, simulations, forensic animation, education, training, and more. We present a framework for creating 3D animated characters using a simple sketching interface coupled with a large, unannotated motion database that is used to find the motion sequences corresponding to the input sketches. Unlike previous work dealing with static sketches, our input sketches can be enhanced by motion and rotation curves that improve matching against the existing animation sequences. Our framework uses animated sequences as the basic building blocks of the final animated scenes and supports various operations on them, such as trimming, resampling, or connecting by means of blending and interpolation. A database of significant and unique poses, together with a two-pass search running on the GPU, allows interactive matching even for large numbers of poses in a template database. The system provides intuitive interfaces and immediate feedback, and places very few demands on the user. A user study showed that the system can be used by novice users with no animation experience or artistic talent, as well as by users with an animation background. Both groups were able to create animated scenes consisting of complex and varied actions in less than 20 minutes.
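As a rough illustration of the two-pass search idea, here is a hedged CPU sketch: a cheap partial distance prunes the pose database, then the full distance ranks the survivors. The feature layout, the pruning heuristic, and the parameter values are assumptions; the paper runs its search on the GPU.

```python
# Hypothetical two-pass nearest-pose search (NumPy stand-in for a GPU version).
import numpy as np

def two_pass_search(query, poses, coarse_dims=8, keep=64):
    """query: (d,) pose feature vector; poses: (n, d) template database."""
    # Pass 1: prune with a cheap L1 distance over the first few dimensions.
    coarse = np.abs(poses[:, :coarse_dims] - query[:coarse_dims]).sum(axis=1)
    candidates = np.argsort(coarse)[:keep]
    # Pass 2: exact Euclidean distance on the surviving candidates.
    fine = np.linalg.norm(poses[candidates] - query, axis=1)
    return candidates[np.argmin(fine)]
```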

5.
Automatic Conversion of Mesh Animations into Skeleton-based Animations
Recently, it has become increasingly popular to represent animations not by means of a classical skeleton-based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods, as well as new surface-based scene capture techniques, offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones, and there is not yet a rich toolbox available that enables easy post-processing and modification of mesh animations. To bridge this gap between the mesh-based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, and surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully-rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of existing animation tools.
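One standard building block for this kind of skeleton extraction is estimating, per frame, the rigid transform that best maps a vertex cluster from the rest pose to its posed positions. The sketch below uses the classic Kabsch/Procrustes solution; whether the paper uses exactly this estimator is an assumption on our part.

```python
# Least-squares rigid fit (Kabsch): posed ~= rest @ R.T + t for one vertex cluster.
import numpy as np

def fit_rigid(rest, posed):
    """rest, posed: (k, 3) corresponding vertex positions."""
    c0, c1 = rest.mean(axis=0), posed.mean(axis=0)
    H = (rest - c0).T @ (posed - c1)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c1 - R @ c0
    return R, t
```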

6.
Geometric meshes that model animated characters must be designed while taking into account the deformations that the shape will undergo during animation. We analyze an input sequence of meshes with point-to-point correspondence, and we automatically produce a quadrangular mesh that fits the input animation well. We first analyze the local deformation that the surface undergoes at each point, and we initialize a cross field that remains as aligned as possible to the principal directions of deformation throughout the sequence. We then smooth this cross field based on an energy that uses a weighted combination of the initial field and the local amount of stretch. Finally, we compute a field-aligned quadrangulation with an off-the-shelf method. Our technique is fast and very simple to implement, and it significantly improves the quality of the output quad mesh and its suitability for character animation, compared to creating the quad mesh based on a single pose. We present experimental results and comparisons with a state-of-the-art quadrangulation method, on both sequences from 3D scanning and synthetic sequences obtained by a rough animation of a triangulated model.
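The per-point analysis can be pictured as follows: compute each triangle's 2x2 deformation gradient between the rest pose and an animated pose, and take its dominant right singular vector as the principal stretch direction that seeds the cross field. This is a hedged sketch of the standard construction, not the paper's exact code.

```python
# Per-triangle principal stretch direction from the deformation gradient.
import numpy as np

def local_frame(tri):
    """Orthonormal 2D basis (2, 3) spanning the triangle's plane."""
    e1 = tri[1] - tri[0]
    n = np.cross(e1, tri[2] - tri[0])
    u = e1 / np.linalg.norm(e1)
    w = n / np.linalg.norm(n)
    return np.stack([u, np.cross(w, u)])

def principal_stretch_dir(rest_tri, posed_tri):
    """rest_tri, posed_tri: (3, 3) triangle vertices as rows."""
    B0, B1 = local_frame(rest_tri), local_frame(posed_tri)
    E0 = B0 @ np.stack([rest_tri[1] - rest_tri[0], rest_tri[2] - rest_tri[0]]).T
    E1 = B1 @ np.stack([posed_tri[1] - posed_tri[0], posed_tri[2] - posed_tri[0]]).T
    F = E1 @ np.linalg.inv(E0)      # 2x2 deformation gradient, rest -> posed
    _, _, Vt = np.linalg.svd(F)
    return Vt[0] @ B0               # most-stretched rest-space axis, lifted to 3D
```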

7.
In this paper, we present an efficient level-of-detail algorithm for texture-based flow visualization. Our goal is to enhance visual perception and performance and to generate smooth animation. To achieve this, we first model an adaptive input texture that takes flow patterns into account, producing view-dependent, high-quality images. We then compute field lines only from sparse sampling points of the input noise texture to output volume line integral convolution textures, and we skip empty space using two quantized binary histograms. To improve image quality, we implement anti-aliasing by adjusting the line integral convolution step size and the thickness of trajectory lines with an opacity function. We further extend our solution to unsteady flow. Flow structures and their evolution are clearly shown through smooth animation, achieved with coherent evolution of particles, handling of discontinuous flow lines, and a spatio-temporal linear constraint on the underlying noise volume. In the results section, we show high-quality, high-performance level-of-detail results for three-dimensional texture-based flow visualization. We also demonstrate that our algorithm achieves smooth evolution for unsteady flow with spatio-temporal coherence. Copyright © 2015 John Wiley & Sons, Ltd.
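For readers unfamiliar with the underlying primitive, the sketch below shows plain 2D line integral convolution, the core that texture-based methods like this one build on. The paper's adaptive input texture, empty-space skipping, and anti-aliasing are not reproduced here.

```python
# Basic 2D line integral convolution over a float noise image.
import numpy as np

def lic_2d(vx, vy, noise, step=0.5, length=15):
    """vx, vy, noise: (h, w) float arrays; convolve noise along streamlines."""
    h, w = noise.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # trace forward, then backward
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(px), int(py)
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    total += noise[iy, ix]
                    count += 1
                    norm = np.hypot(vx[iy, ix], vy[iy, ix]) + 1e-8
                    px += sign * step * vx[iy, ix] / norm
                    py += sign * step * vy[iy, ix] / norm
            out[y, x] = total / max(count, 1)
    return out
```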

8.
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper we present a new representation for 3D animated characters which supports output-sensitive rendering. Our approach is flexible in the sense that it does not require us to pre-define the animation sequences beforehand, nor to pre-compute a dense set of pre-rendered views for each animation frame. Each character is encoded through a small collection of textured boxes storing colour and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone and a fragment shader is used to recover the original geometry using a dual-depth version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effectively. Our approach drastically reduces both the number of primitives being drawn and the number of bones influencing each primitive, at the expense of a very slight per-fragment overhead. We show that, beyond a certain distance threshold, our compact representation is much faster to render than traditional level-of-detail triangle meshes. Our user study demonstrates that replacing polygonal geometry by our impostors produces negligible visual artefacts.

9.
Articulated character animation is typically performed by manually creating and rigging a skeleton into an unfolded 3D object. Such tasks are not trivial, however, as they require a substantial amount of training and practice. Although automatic skeleton extraction methods have been proposed, they generally cannot guarantee that the resulting skeleton helps produce the animations the user intends. In this paper, we present a sketching-based skeleton generation method suitable for use in mobile environments. The method takes user sketches as input and, based on the mesh segmentation result of a 3D object, estimates a skeleton for articulated character animation. In addition, we are currently developing a Web-based mobile platform to support mesh editing by a group of collaborating users, and we describe the system architecture of this platform. Results show that our method produces better skeletons in terms of joint positions and topological structure.

10.
Multi-Resolution Rendering of Complex Animated Scenes

11.
This paper presents a novel modeling system, called B-Mesh, for generating base meshes of 3D articulated shapes. The user only needs to draw a one-dimensional skeleton and to specify key balls at the skeletal nodes. The system then automatically generates a quad-dominant initial mesh. Further subdivision and evolution are performed to refine the initial mesh and generate a quad mesh with good edge flow along the skeleton directions. The user can also modify and manipulate the shape by editing the skeleton and the key balls, and can easily compose new shapes by cutting and pasting existing models in our system. The mesh models generated by our system are well suited to sculpting operations for sculpture modeling as well as to skeleton-based animation.

12.
In this paper we present a new character animation technique in which the animation adapts itself to changes in the user's perspective: when the user moves and their point of view of the animation changes, the character animation adapts in response. The resulting animation, generated in real time, is a blend of key animations provided a priori by the animator. Blending is done with efficient dual-quaternion transformation blending. The user's point of view is tracked using either computer vision techniques or a simple user-controlled input modality, such as mouse-based input, and the tracked point of view then selects a suitable blend of animations. We show how to author and use such animations in both virtual and augmented reality scenarios, and demonstrate that they significantly heighten the users' sense of presence when they interact with such self-adaptive animations of virtual characters.

13.
This article presents the properties of animation with space-time objects, where a space-time object is a geometric object embedded in R^4 with a volumetric topology. Animations are obtained by deforming space-time objects with a free-form deformation model. In this way, topological modifications such as disconnection and hole creation, as well as classical geometric modifications, can be created in an animated object.
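As a point of reference, here is the classic Bernstein free-form deformation the abstract refers to, written for a 3D lattice; the paper applies the same machinery to objects embedded in R^4, where a fourth lattice axis would be added analogously. This is a generic FFD sketch, not the paper's specific model.

```python
# Classic trivariate Bernstein FFD: deform a point by a control lattice.
import numpy as np
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

def ffd(point, lattice):
    """point: local coords (s, t, u) in [0,1]^3;
    lattice: (l+1, m+1, n+1, 3) deformed control-point positions."""
    l, m, n = (d - 1 for d in lattice.shape[:3])
    s, t, u = point
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                out += w * lattice[i, j, k]
    return out
```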

14.
This paper presents a method that can convert a given 3D mesh into a flat-foldable model consisting of rigid panels. Previous work proposed a method to assist the manual design of a single component of such a flat-foldable model, consisting of vertically-connected side panels as well as horizontal top and bottom panels. Our method semi-automatically generates a more complicated model that approximates the input mesh with multiple convex components. The user specifies the folding direction of each convex component and the fidelity of the shape approximation. Given these inputs, our method optimizes the shapes and positions of the panels of each convex component to make the whole model flat-foldable. The user can check a folding animation of the output model. We demonstrate the effectiveness of our method by fabricating physical paper prototypes of flat-foldable models.

15.
In this paper, we present a novel exemplar-based technique for the interpolation between two textures that combines patch-based and statistical approaches. Motivated by the notion of texture as a largely local phenomenon, we warp and blend small image neighborhoods prior to patch-based texture synthesis. In addition, interpolating and enforcing characteristic image statistics faithfully handles high frequency detail. We are able to create both intermediate textures as well as continuous transitions. In contrast to previous techniques computing a global morphing transformation on the entire input exemplar images, our localized and patch-based approach allows us to successfully interpolate between textures with considerable differences in feature topology for which no smooth global warping field exists.

16.
Geometry meshes introduce user control into texture synthesis and editing and bring more variation to the synthesized results, but two related problems still need better solutions. One is efficiently generating meshes with the desired size and pattern from simpler user inputs; the other is using mesh information to improve the quality of the synthesized results. We present a new two-step texture design and synthesis method that addresses both problems. Besides the example texture, a small mesh sketch, drawn by hand or detected from the example texture, is given as input to our algorithm. A geometry-space mesh synthesis method then avoids cell-by-cell optimization, and distance and orientation features are introduced to improve the quality of mesh rasterization. Results show that with our method, users can design and synthesize textures from mesh sketches easily and interactively.

17.
Current techniques for generating animated scenes involve either videos (whose resolution is limited) or a single image (which requires a significant amount of user interaction). In this paper, we describe a system that allows the user to quickly and easily produce a compelling-looking animation from a small collection of high-resolution stills. Our system has two unique features. First, it applies an automatic partial temporal order recovery algorithm to the stills in order to approximate the original scene dynamics. The output sequence is subsequently extracted using a second-order Markov chain model. Second, a region with large motion variation can be automatically decomposed into semi-autonomous regions whose temporal orderings are softly constrained, to ensure motion smoothness throughout the original region. The final animation is obtained by frame interpolation and feathering. Our system also provides a simple-to-use interface to help the user fine-tune the motion of the animated scene. Using our system, an animated scene can be generated in minutes. We show results for a variety of scenes.
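The second-order Markov chain step can be sketched as follows: the next still is drawn conditioned on the previous two, using pairwise affinity scores. The score tensor and the seeding are illustrative assumptions; the partial temporal order recovery that would produce those scores is not reproduced.

```python
# Hypothetical second-order Markov chain sampling over a set of stills.
import numpy as np

def sample_sequence(score, seed_pair, length, rng=None):
    """score[i, j, k]: affinity of still k following the pair (i, j)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    seq = list(seed_pair)                 # two seed frames
    for _ in range(length - 2):
        i, j = seq[-2], seq[-1]
        p = score[i, j].astype(float)
        p /= p.sum()                      # normalize into a distribution
        seq.append(int(rng.choice(len(p), p=p)))
    return seq
```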

18.
User-Controlled Texture Synthesis
We propose a user-controlled texture synthesis algorithm that applies to arbitrary 2D planar regions and 3D meshes of arbitrary topology, and that conveniently controls continuous variation of texture direction and scale during synthesis. An arbitrary planar region is first partitioned into a fairly uniform triangle mesh, and the resulting triangles serve as the basic synthesis units. Vectors specified by the user on this triangle mesh, representing texture direction and scale, are interpolated into a vector field that controls the variation of the synthesized texture. The algorithm extends naturally to 3D triangle meshes, using triangle faces as synthesis units and directly outputting texture coordinates for each vertex after synthesis. The algorithm thus provides a unified implementation framework for 2D and 3D texture synthesis. Experimental results show that it can generate satisfactory texture synthesis results in arbitrary target regions according to user interaction.
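To illustrate the control mechanism, here is a hedged sketch of interpolating user-placed direction/scale vectors into a per-vertex field over the mesh. Inverse-distance weighting stands in for the paper's interpolation, which the abstract does not specify.

```python
# Stand-in field interpolation: inverse-distance weighting of user anchors.
import numpy as np

def interpolate_field(verts, anchor_ids, anchor_vecs, power=2.0):
    """verts: (n, 3) vertex positions; anchors: user vectors at some vertices.
    Returns an (n, 3) field encoding texture direction and scale per vertex."""
    field = np.zeros((len(verts), 3))
    for vid in range(len(verts)):
        wsum = 0.0
        for aid, vec in zip(anchor_ids, anchor_vecs):
            d = np.linalg.norm(verts[vid] - verts[aid]) + 1e-8
            wgt = d ** -power
            field[vid] += wgt * np.asarray(vec, dtype=float)
            wsum += wgt
        field[vid] /= wsum
    return field
```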

19.
Inverse kinematics (IK) equations are usually solved through approximate linearizations or heuristics. These methods lead to character animations that are unnatural-looking or unstable because they consider neither the motion coherence nor the limits of human joints. In this paper, we present a method based on the formulation of multi-variate Gaussian distribution models (MGDMs), which precisely specify the soft joint constraints of a kinematic skeleton. Each distribution model is described by a covariance matrix and a mean vector representing both the joint limits and the coherence of motion of different limbs. The MGDMs are automatically learned from motion capture data in a fast and unsupervised process. When the character is animated or posed, a Gaussian process synthesizes a new MGDM for each different vector of target positions, and the corresponding objective function is solved with Jacobian-based IK. This makes our method practical to use and easy to insert into pre-existing animation pipelines. Compared with previous works, our method is more stable and more precise, while also satisfying the anatomical constraints of human limbs. Our method leads to natural and realistic results without sacrificing real-time performance.
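A minimal sketch of the soft-constraint idea: fit one multivariate Gaussian to captured joint-angle vectors and use its Mahalanobis gradient as a prior inside a damped Jacobian IK step. The paper synthesizes a fresh MGDM per target via a Gaussian process; that part is omitted, and all parameter values here are assumptions.

```python
# Gaussian pose prior folded into a damped least-squares IK step (sketch).
import numpy as np

def fit_mgdm(poses):
    """poses: (n_samples, n_dofs) joint-angle vectors from motion capture."""
    mean = poses.mean(axis=0)
    cov = np.cov(poses, rowvar=False) + 1e-6 * np.eye(poses.shape[1])
    return mean, np.linalg.inv(cov)

def ik_step(theta, jacobian, err, mean, cov_inv, lam=0.1, prior_w=0.01):
    """theta: (n_dofs,) joint angles; jacobian(theta): (3, n_dofs);
    err: (3,) end-effector position error. One prior-biased DLS step."""
    J = jacobian(theta)
    prior_grad = cov_inv @ (theta - mean)     # gradient of the Mahalanobis term
    JtJ = J.T @ J + lam * np.eye(theta.size)
    step = np.linalg.solve(JtJ, J.T @ err - prior_w * prior_grad)
    return theta + step
```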

20.
The goal of texture synthesis is to generate an arbitrarily large high-quality texture from a small input sample. Generally, it is assumed that the input image is given as a flat, square piece of texture, so it has to be carefully prepared from a picture taken under ideal conditions. Instead, we would like to extract the input texture from any surface within an arbitrary photograph. This introduces several challenges: only parts of the photograph are covered with the texture of interest, perspective and scene geometry introduce distortions, and the texture is non-uniformly sampled during the capture process. This breaks many of the assumptions used for synthesis. In this paper we combine a simple novel user interface with a generic per-pixel synthesis algorithm to achieve high-quality synthesis from a photograph. Our interface lets the user locally describe the geometry supporting the textures by combining rational Bézier patches, which are particularly well suited to describing curved surfaces under projection. Further, we extend per-pixel synthesis to account for arbitrary texture sparsity and distortion, both in the input image and in the synthesis output. Applications range from synthesizing textures directly from photographs to high-quality texture completion.
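Since the interface is built on rational Bézier patches, the sketch below shows standard evaluation of one such patch in image space; the rational weights are what let a flat patch absorb perspective distortion. This is the textbook formula with illustrative argument names, not the paper's interface code.

```python
# Standard rational Bézier patch evaluation at (u, v) in [0,1]^2.
import numpy as np
from math import comb

def rational_bezier_patch(ctrl, wts, u, v):
    """ctrl: (n+1, m+1, 2) control points in image space; wts: matching weights."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    num = np.zeros(2)
    den = 0.0
    for i in range(n + 1):
        bu = comb(n, i) * u**i * (1.0 - u)**(n - i)
        for j in range(m + 1):
            bv = comb(m, j) * v**j * (1.0 - v)**(m - j)
            b = bu * bv * wts[i, j]
            num += b * ctrl[i, j]
            den += b
    return num / den
```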

