Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
We introduce the concept of 4D model flow for the precomputed alignment of dynamic surface appearance across 4D video sequences of different motions reconstructed from multi‐view video. Precomputed 4D model flow allows the efficient parametrization of surface appearance from the captured videos, which enables efficient real‐time rendering of interpolated 4D video sequences whilst accurately reproducing visual dynamics, even when using a coarse underlying geometry. We estimate the 4D model flow using an image‐based approach that is guided by available geometry proxies. We propose a novel representation in surface texture space for efficient storage and online parametric interpolation of dynamic appearance. Our 4D model flow overcomes previous requirements for computationally expensive online optical flow computation for data‐driven alignment of dynamic surface appearance by precomputing the appearance alignment. This leads to an efficient rendering technique that enables the online interpolation between 4D videos in real time, from arbitrary viewpoints and with visual quality comparable to the state of the art.
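The core idea, precomputing an alignment so that online interpolation reduces to a cheap warp-and-blend in texture space, can be illustrated with a toy sketch. Everything here (the function name, the nearest-texel backward warp, the single 2D flow field) is a hypothetical simplification for illustration, not the paper's actual representation:

```python
import numpy as np

def blend_aligned_textures(tex_a, tex_b, flow, alpha):
    """Toy parametric appearance interpolation: warp texture B onto A's
    parametrisation using a precomputed 2D flow field (nearest-texel
    backward warp, clamped at the border), then blend linearly by alpha."""
    h, w = tex_a.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    xs = np.clip(np.rint(xx + flow[..., 0]).astype(int), 0, w - 1)
    ys = np.clip(np.rint(yy + flow[..., 1]).astype(int), 0, h - 1)
    warped_b = tex_b[ys, xs]          # B resampled into A's texel grid
    return (1 - alpha) * tex_a + alpha * warped_b
```

Because the flow is precomputed, the per-frame cost at render time is just the lookup and blend, which is what makes real-time interpolation feasible even for many stored sequences.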

2.
We present an algorithm for acquiring the 3D surface geometry and motion of a dynamic piecewise‐rigid object using a single depth video camera. The algorithm identifies and tracks the rigid components in each frame, while accumulating the geometric information acquired over time, possibly from different viewpoints. The algorithm also reconstructs the dynamic skeleton of the object, and can thus be used for markerless motion capture. The acquired model can then be animated to novel poses. We show the results of the algorithm applied to synthetic and real depth video.

3.
We present a technique for coupling simulated fluid phenomena that interact with real dynamic scenes captured as a binocular video sequence. We first process the binocular video sequence to obtain a complete 3D reconstruction of the scene, including velocity information. We use stereo for the visible parts of 3D geometry and surface completion to fill the missing regions. We then perform fluid simulation within a 3D domain that contains the object, enabling one‐way coupling from the video to the fluid. In order to maintain temporal consistency of the reconstructed scene and the animated fluid across frames, we develop a geometry tracking algorithm that combines optic flow and depth information with a novel technique for “velocity completion”. The velocity completion technique uses local rigidity constraints to hypothesize a motion field for the entire 3D shape, which is then used to propagate and filter the reconstructed shape over time. This approach not only generates smoothly varying geometry across time, but also simultaneously provides the necessary boundary conditions for one‐way coupling between the dynamic geometry and the simulated fluid. Finally, we employ a GPU based scheme for rendering the synthetic fluid in the real video, taking refraction and scene texture into account.

4.
Physical simulation has long been the approach of choice for generating realistic hair animations in CG. A constant drawback of simulation, however, is the necessity to manually set the physical parameters of the simulation model in order to get the desired dynamic behavior. To alleviate this, researchers have begun to explore methods for reconstructing hair from the real world and even to estimate the corresponding simulation parameters through the process of inversion. So far, however, these methods have had limited applicability, because dynamic hair capture can only be played back without the ability to edit, and solving for simulation parameters can only be accomplished for static hairstyles, ignoring the dynamic behavior. We present the first method for capturing dynamic hair and automatically determining the physical properties for simulating the observed hairstyle in motion. Since our dynamic inversion is agnostic to the simulation model, the proposed method applies to virtually any hair simulation technique, which we demonstrate using two state‐of‐the‐art hair simulation models. The output of our method is a fully simulation‐ready hairstyle, consisting of both the static hair geometry as well as its physical properties. The hairstyle can be easily edited by adding additional external forces, changing the head motion, or re‐simulating in completely different environments, all while remaining faithful to the captured hairstyle.

5.
We propose an approach for temporally coherent patch‐based texture synthesis on the free surface of fluids. Our approach is applied as a post‐process, using the surface and velocity field from any fluid simulator. We apply the texture from the exemplar through multiple local mesh patches fitted to the surface and mapped to the exemplar. Our patches are constructed from the fluid free surface by taking a subsection of the free surface mesh. As such, they are initially very well adapted to the fluid's surface, and can later deform according to the free surface velocity field, allowing a greater ability to represent surface motion than rigid or 2D grid‐based patches. From one frame to the next, the patch centers and surrounding patch vertices are advected according to the velocity field. We seek to maintain a Poisson disk distribution of patches, and following advection, the Poisson disk criterion determines where to add new patches and which patches should be flagged for removal. The removal considers the local number of patches: in regions containing too many patches, we accelerate the temporal removal. This reduces the number of patches while still meeting the Poisson disk criterion. Reducing areas with too many patches speeds up the computation and avoids patch‐blending artifacts. The final step of our approach creates the overall texture in an atlas where each texel is computed from the patches using a contrast‐preserving blending function. Our tests show that the approach works well on free surfaces undergoing significant deformation and topological changes. Furthermore, we show that our approach provides good results for many fluid simulation scenarios, and with many texture exemplars. We also confirm that the optical flow from the resulting texture matches the fluid velocity field. Overall, our approach compares favorably against recent work in this area.
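The Poisson-disk maintenance step described above can be sketched as a greedy pass over the advected patch centres: a point survives only if it keeps a minimum distance from every point already kept. This is an illustrative simplification (uniform radius, brute-force distances, no spatial acceleration), not the paper's implementation:

```python
import numpy as np

def enforce_poisson_disk(points, r):
    """Greedy Poisson-disk check over advected patch centres: keep a
    point only if it lies at least r away from every point kept so far;
    otherwise flag it for removal."""
    kept, removed = [], []
    for p in points:
        if all(np.linalg.norm(p - q) >= r for q in kept):
            kept.append(p)
        else:
            removed.append(p)
    return np.array(kept), np.array(removed)
```

Gaps left after advection would then be filled by inserting new candidate centres wherever they pass the same distance test.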

6.
We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB‐D camera, without the need of a multiple‐camera setup surrounding a scene. In our framework, the input is a 3D template model with an RGB‐D image sequence, and geometric warping fields are found using a state‐of‐the‐art non‐rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi‐scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphical hardware and provides a handy configuration to capture a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture. We also show that our approach is resilient to misalignment issues caused by imperfect estimation of warping fields and inaccurate camera parameters.

7.
Optical Flow Estimation Based on Texture Constraints and a Parametric Motion Model
We propose a new optical flow estimation method based on locally planar patch motion, with the goal of obtaining accurate and dense optical flow estimates. Unlike previous algorithms that take brightness-consistent regions as the assumed planes, our algorithm exploits the texture information of the image sequence and performs motion estimation on texture-segmented regions. The algorithm first computes a coarse optical flow with a differential method, yielding an initial estimate of the parametric flow model; a region-iteration algorithm then refines this initial estimate, producing a fine planar segmentation together with the corresponding parametric flow models. A texture-based partial fitting procedure is applied at every step, ensuring accurate flow estimates at texture edges. Experiments on standard image sequences show that the method produces finer optical flow estimates, especially for outdoor sequences rich in texture, with particularly clear improvements at motion boundaries.
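The coarse differential flow step mentioned above is in the spirit of classic windowed least-squares (Lucas-Kanade style) estimation. A minimal textbook sketch, not the paper's texture-constrained algorithm, looks like this:

```python
import numpy as np

def lucas_kanade_flow(I0, I1, x, y, w=2):
    """Differential flow estimate in a (2w+1)^2 window around (x, y):
    solves the least-squares system built from the brightness-constancy
    constraint Ix*u + Iy*v + It = 0 (central differences for Ix, Iy)."""
    Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2.0
    Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2.0
    It = I1 - I0
    sl = np.s_[y - w:y + w + 1, x - w:x + w + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

A parametric (e.g. planar) model would replace the constant (u, v) per window with a low-parameter flow field fitted over each segmented region.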

8.
In this paper, we present a progressive compression algorithm for textured surface meshes, which is able to handle polygonal non‐manifold meshes as well as discontinuities in the texture mapping. Our method applies iterative batched simplifications, which create high quality levels of detail by preserving both the geometry and the texture mapping. The main features of our algorithm are (1) generic edge collapse and vertex split operators suited for polygonal non‐manifold meshes with arbitrary texture seam configurations, and (2) novel geometry‐driven prediction schemes and entropy reduction techniques for efficient encoding of connectivity and texture mapping. To our knowledge, our method is the first progressive algorithm to handle polygonal non‐manifold models. For geometry and connectivity encoding of triangular manifolds and non‐manifolds, our method is competitive with state‐of‐the‐art and even better at low/medium bitrates. Moreover, our method allows progressive encoding of texture coordinates with texture seams; it outperforms state‐of‐the‐art approaches for texture coordinate encoding. We also present a bit‐allocation framework which multiplexes mesh and texture refinement data using a perceptually‐based image metric, in order to optimize the quality of levels of detail.

9.
This paper addresses the problem of obtaining 3d detailed reconstructions of human faces in real-time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture high-detailed deforming 3d surfaces at high frame rates and without having to use any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lights setup and how they interact with the specific material being captured, in this case, human faces. For this purpose we develop a self-calibration technique where the person being captured is asked to perform a rigid motion in front of the camera, maintaining a neutral expression. Rigidity constraints are then used to compute the head’s motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3d model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: In the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate this diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
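The first, RANSAC-based stage can be illustrated with a deliberately reduced toy: assume a single unknown albedo and the Lambertian model I = albedo · (n · l), with specular points acting as outliers. The function name and the one-parameter model are illustrative assumptions, far simpler than the paper's full reflectance estimation:

```python
import random

def ransac_diffuse_albedo(shading, intensity, n_iters=200, thresh=0.05):
    """Toy RANSAC for the Lambertian model I = albedo * (n . l): given
    per-point diffuse shading s = n . l and observed intensity I,
    robustly estimate one albedo, treating specular points as outliers."""
    best_albedo, best_inliers = None, []
    for _ in range(n_iters):
        i = random.randrange(len(shading))
        if shading[i] == 0:
            continue
        albedo = intensity[i] / shading[i]  # model from a minimal 1-point sample
        inliers = [j for j in range(len(shading))
                   if abs(intensity[j] - albedo * shading[j]) < thresh]
        if len(inliers) > len(best_inliers):
            best_albedo, best_inliers = albedo, inliers
    return best_albedo, best_inliers
```

In the paper's stratified scheme, the outliers of this stage are then passed to a non-linear fit of a non-Lambertian model.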

10.
4D Reconstruction of Blooming Flowers
Flower blooming is a beautiful phenomenon in nature: flowers open in an intricate and complex manner, as petals bend, stretch and twist under various deformations. Flower petals are typically thin structures arranged in tight configurations with heavy self‐occlusions. Thus, capturing and reconstructing spatially and temporally coherent sequences of blooming flowers is highly challenging. Early in the process only exterior petals are visible and thus interior parts will be completely missing in the captured data. Utilizing commercially available 3D scanners, we capture the visible parts of blooming flowers into a sequence of 3D point clouds. We reconstruct the flower geometry and deformation over time using a template‐based dynamic tracking algorithm. To track and model interior petals hidden in early stages of the blooming process, we employ an adaptively constrained optimization. Flower characteristics are exploited to track petals both forward and backward in time. Our methods allow us to faithfully reconstruct the flower blooming process of different species. In addition, we provide comparisons with state‐of‐the‐art physical simulation‐based approaches and evaluate our approach by using photos of captured real flowers.

11.
The goal of texture synthesis is to generate an arbitrarily large high‐quality texture from a small input sample. Generally, it is assumed that the input image is given as a flat, square piece of texture, thus it has to be carefully prepared from a picture taken under ideal conditions. Instead we would like to extract the input texture from any surface from within an arbitrary photograph. This introduces several challenges: Only parts of the photograph are covered with the texture of interest, perspective and scene geometry introduce distortions, and the texture is non‐uniformly sampled during the capture process. This breaks many of the assumptions used for synthesis. In this paper we combine a simple novel user interface with a generic per‐pixel synthesis algorithm to achieve high‐quality synthesis from a photograph. Our interface lets the user locally describe the geometry supporting the textures by combining rational Bézier patches. These are particularly well suited to describe curved surfaces under projection. Further, we extend per‐pixel synthesis to account for arbitrary texture sparsity and distortion, both in the input image and in the synthesis output. Applications range from synthesizing textures directly from photographs to high‐quality texture completion.

12.
Human Motion Estimation Based on a Relative Deformation Model and Regularization
To make the estimation of the motion and structure parameters of human arms and legs from a monocular image sequence of a walking person more reliable and robust, we propose a human motion estimation method based on a relative deformation model and regularization. Under an object-centered representation of motion, the method first derives a non-rigid motion model based on the concept of relative deformation, by adding deformation coefficients to a rigid motion model. It then estimates the motion and structure parameters from this non-rigid model in a regularized fashion, incorporating prior knowledge of human motion as regularization terms to make the estimates more robust. Experimental results show that the method effectively captures the non-rigid motion patterns of the human body, and that the relative deformation coefficients in the motion model reflect, to some extent, the regularities of human motion.

13.
Humans use their bodies in a highly expressive way during conversation, and animated characters that lack this form of non‐verbal expression can seem stiff and unemotional. An important aspect of non‐verbal expression is that people respond to each other's behavior and are highly attuned to picking up this type of response. This is particularly important for the feedback given while listening to someone speak. However, automatically generating this type of behavior is difficult as it is highly complex and subtle. This paper takes a data‐driven approach to generating interactive social behavior. Listening behavior is motion captured, together with the audio being listened to. These data are used to learn an animation model of the responses of one person to the other. This allows us to create characters that respond in real‐time during a conversation with a real human. Copyright © 2008 John Wiley & Sons, Ltd.

14.
We present a novel method to reconstruct a fluid's 3D density and motion based on just a single sequence of images. This is rendered possible by using powerful physical priors for this strongly under‐determined problem. More specifically, we propose a novel strategy to infer density updates strongly coupled to previous and current estimates of the flow motion. Additionally, we employ an accurate discretization and depth‐based regularizers to compute stable solutions. Using only one view for the reconstruction reduces the complexity of the capturing setup drastically and could even allow for online video databases or smart‐phone videos as inputs. The reconstructed 3D velocity can then be flexibly utilized, e.g., for re‐simulation, domain modification or guiding purposes. We will demonstrate the capacity of our method with a series of synthetic test cases and the reconstruction of real smoke plumes captured with a Raspberry Pi camera.
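One way a reconstructed velocity field is reused for re-simulation is to advect a density field along it. A minimal 2D semi-Lagrangian sketch (nearest-cell backtrace, clamped at the grid boundary) is shown below; this is a generic textbook step, far simpler than the paper's coupled density/velocity inference:

```python
import numpy as np

def advect_density(rho, u, v, dt):
    """Semi-Lagrangian advection: trace each cell centre backwards along
    the velocity field (u, v) for time dt and sample the density there
    (nearest cell, clamped to the grid)."""
    ny, nx = rho.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    xs = np.clip(np.rint(xx - dt * u), 0, nx - 1).astype(int)
    ys = np.clip(np.rint(yy - dt * v), 0, ny - 1).astype(int)
    return rho[ys, xs]
```

A production solver would use (bi)linear interpolation at the backtraced positions rather than nearest-cell sampling, at the cost of some numerical diffusion.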

15.
In vision and graphics, advanced object models require not only 3D shape, but also surface detail. While several scanning devices exist to capture the global shape of an object, few methods concentrate on capturing the fine-scale detail. Fine-scale surface geometry (relief texture), such as surface markings, roughness, and imprints, is essential in highly realistic rendering and accurate prediction. We present a novel approach for measuring the relief texture of specular or partially specular surfaces using a specialized imaging device with a concave parabolic mirror to view multiple angles in a single image. Laser scanning typically fails for specular surfaces because of light scattering, but our method is explicitly designed for specular surfaces. Also, the spatial resolution of the measured geometry is significantly higher than standard methods, so very small surface details are captured. Furthermore, spatially varying reflectance is measured simultaneously, i.e., both texture color and texture shape are retrieved.

16.
Shape correspondence is a fundamental problem in computer graphics and vision, with applications in various problems including animation, texture mapping, robotic vision, medical imaging, archaeology and many more. In settings where the shapes are allowed to undergo non‐rigid deformations and only partial views are available, the problem becomes very challenging. To this end, we present a non‐rigid multi‐part shape matching algorithm. We assume to be given a reference shape and its multiple parts undergoing a non‐rigid deformation. Each of these query parts can be additionally contaminated by clutter, may overlap with other parts, and there might be missing parts or redundant ones. Our method simultaneously solves for the segmentation of the reference model, and for a dense correspondence to (subsets of) the parts. Experimental results on synthetic as well as real scans demonstrate the effectiveness of our method in dealing with this challenging matching scenario.

17.
Understanding how fuel sloshes in a fuel cell, as a vehicle races around a circuit, is an important but mostly unexplored factor when designing fuel containment systems. Cell designs are based on knowledge of how liquids slosh in other containers, with the design and placement of structures, such as weirs, based on engineering judgement. This work aims to provide better understanding for this difficult problem with a view to improve future designs. A Graphics Processing Unit (GPU) based Smoothed Particle Hydrodynamics (SPH) model is presented to simulate the fuel sloshing problem, with results from a simplified and real fuel cell geometry shown and compared against real data recorded in a vehicle. The vehicle motion and accelerations are included in the SPH simulations using a body force within the momentum equation. Results show good agreement between the simulation and the real fuel movement, with bulk motion captured well for accelerations up to 5 times gravity. Focus is placed on the practicality of the method for use as part of an industrial design process, therefore the amount of time needed to compute results is considered throughout. Computational performance is found to be within acceptable limits, while numerical accuracy is actively considered through the use of Kahan compensated summation. It is concluded that the model is successful in capturing the necessary fluid dynamics for it to be useful in fuel cell design. It is expected that the method will provide insight into current cell designs and highlight where improvements can be made.

18.
We present a system for recording a live dynamic facial performance, capturing highly detailed geometry and spatially varying diffuse and specular reflectance information for each frame of the performance. The result is a reproduction of the performance that can be rendered from novel viewpoints and novel lighting conditions, achieving photorealistic integration into any virtual environment. Dynamic performances are captured directly, without the need for any template geometry or static geometry scans, and processing is completely automatic, requiring no human input or guidance. Our key contributions are a heuristic for estimating facial reflectance information from gradient illumination photographs, and a geometry optimization framework that maximizes a principled likelihood function combining multi‐view stereo correspondence and photometric stereo, using multi‐resolution belief propagation. The output of our system is a sequence of geometries and reflectance maps, suitable for rendering in off‐the‐shelf software. We show results from our system rendered under novel viewpoints and lighting conditions, and validate our results by demonstrating a close match to ground truth photographs.

19.
We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non‐static backgrounds, and unsynchronized cameras with varying image quality and frame rate. We completely rely on optical information and do not make use of additional sensor information (e.g. depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace the complex and expensive motion capture studios with few consumer‐grade cameras even in uncontrolled outdoor scenes. We demonstrate this based on challenging multi‐view video sequences that are captured with unsynchronized and moving (e.g. mobile‐phone or GoPro) cameras.

20.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time‐deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3‐D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time‐space surface of the time‐varying point cloud. We then use the four‐dimensional implicit function to reconstruct a polygonal model for the first time‐step. By sliding this template mesh along the time‐space surface in an as‐rigid‐as‐possible manner, we obtain reconstructions for further time‐steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real‐time 3‐D scanners.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号