Similar Documents
20 similar documents found (search time: 15 ms).
1.
4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video-realistic interactive animation through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves a >90% reduction in size and halves the rendering cost.
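As an illustration of the parametric motion-space idea, the Python sketch below linearly blends two time-aligned 4D mesh sequences and computes view-dependent weights for per-camera texture layers. This is a minimal sketch, not the paper's method: the names (blend_4dvt, view_weights), the linear blend, and the cosine-lobe weighting are all assumptions; the actual 4DVT representation and motion parametrisation are more involved.

```python
import numpy as np

def blend_4dvt(seq_a, seq_b, alpha):
    """Linearly blend two time-aligned mesh sequences sharing topology.

    seq_a, seq_b: (F, V, 3) vertex positions of two captured motions.
    alpha:        parameter in [0, 1] selecting a point in the motion
                  space spanned by the two sequences.
    """
    assert seq_a.shape == seq_b.shape
    return (1.0 - alpha) * seq_a + alpha * seq_b

def view_weights(cam_dirs, view_dir, power=8.0):
    """View-dependent blend weights for per-camera texture layers:
    cameras facing closest to the novel view direction dominate.

    cam_dirs: (K, 3) unit viewing directions of the capture cameras.
    view_dir: (3,) unit direction of the requested novel view.
    """
    cos = np.clip(cam_dirs @ view_dir, 0.0, None)
    w = cos ** power
    return w / (w.sum() + 1e-9)
```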

2.
In this paper, we propose a new method that processes multiple synchronized video sequences and generates 3D rendering of dynamic objects in the video. It exploits an efficient image-based reconstruction scheme that constructs and shades 3D models of objects from silhouette images by combining image-based visual hull and view morphing. The proposed hybrid method improves the speed and the quality of the previous visual hull sampling methods. We designed and implemented a system based on this method which is relatively low cost and does not require any special hardware or specific environment. Copyright © 2003 John Wiley & Sons, Ltd.
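The image-based visual hull used here is never reconstructed volumetrically, but the underlying silhouette-intersection constraint is easy to state in code. The following Python sketch carves a sampled volume against calibrated silhouettes; carve_visual_hull and its brute-force sampling are illustrative assumptions, not the paper's faster image-based scheme.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_pts):
    """Keep the sample points whose projections fall inside every
    silhouette: the defining constraint of the visual hull.

    silhouettes: list of boolean images (H, W), one per calibrated camera.
    projections: list of 3x4 projection matrices, one per camera.
    grid_pts:    (N, 3) sample points covering the working volume.
    Returns a boolean mask over grid_pts.
    """
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    inside = np.ones(len(grid_pts), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uvw = homog @ P.T                      # project to the image plane
        z = uvw[:, 2]
        valid = z > 0                          # in front of the camera
        u = np.zeros(len(z), dtype=int)
        v = np.zeros(len(z), dtype=int)
        u[valid] = np.round(uvw[valid, 0] / z[valid]).astype(int)
        v[valid] = np.round(uvw[valid, 1] / z[valid]).astype(int)
        h, w = sil.shape
        valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(z), dtype=bool)
        hit[valid] = sil[v[valid], u[valid]]
        inside &= hit                          # outside any silhouette -> carved
    return inside
```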

3.
We present a technique for coupling simulated fluid phenomena that interact with real dynamic scenes captured as a binocular video sequence. We first process the binocular video sequence to obtain a complete 3D reconstruction of the scene, including velocity information. We use stereo for the visible parts of 3D geometry and surface completion to fill the missing regions. We then perform fluid simulation within a 3D domain that contains the object, enabling one-way coupling from the video to the fluid. In order to maintain temporal consistency of the reconstructed scene and the animated fluid across frames, we develop a geometry tracking algorithm that combines optic flow and depth information with a novel technique for "velocity completion". The velocity completion technique uses local rigidity constraints to hypothesize a motion field for the entire 3D shape, which is then used to propagate and filter the reconstructed shape over time. This approach not only generates smoothly varying geometry across time, but also simultaneously provides the necessary boundary conditions for one-way coupling between the dynamic geometry and the simulated fluid. Finally, we employ a GPU based scheme for rendering the synthetic fluid in the real video, taking refraction and scene texture into account.

4.
We present a fast and efficient non-rigid shape tracking method for modeling dynamic 3D objects from multiview video. Starting from an initial mesh representation, the shape of a dynamic object is tracked over time, both in geometry and topology, based on multiview silhouette and 3D scene flow information. The mesh representation of each frame is obtained by deforming the mesh representation of the previous frame towards the optimal surface defined by the time-varying multiview silhouette information with the aid of 3D scene flow vectors. The whole time-varying shape is then represented as a mesh sequence which can efficiently be encoded in terms of restructuring and topological operations, and small-scale vertex displacements along with the initial model. The proposed method has the ability to deal with dynamic objects that may undergo non-rigid transformations and topological changes. The time-varying mesh representations of such non-rigid shapes, which are not necessarily of fixed connectivity, can successfully be tracked thanks to restructuring and topological operations employed in our deformation scheme. We demonstrate the performance of the proposed method both on real and synthetic sequences.
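The "small-scale vertex displacements along with the initial model" suggest a delta-style encoding for the fixed-connectivity stretches of the sequence. Below is a hedged Python sketch of that idea only (encode_mesh_sequence and decode_mesh_sequence are hypothetical names); the restructuring and topological operations the paper relies on are not modelled here.

```python
import numpy as np

def encode_mesh_sequence(frames, step=1e-3):
    """Delta-encode a fixed-connectivity mesh sequence: keep the first
    frame plus quantised per-frame vertex displacements.

    frames: (F, V, 3) float array of vertex positions over time.
    Returns (base, deltas) with deltas as int32 quantised displacements.
    Note: quantisation error accumulates across frames in this toy; a
    production codec would quantise against the reconstructed previous
    frame instead.
    """
    base = frames[0].copy()
    deltas = np.round(np.diff(frames, axis=0) / step).astype(np.int32)
    return base, deltas

def decode_mesh_sequence(base, deltas, step=1e-3):
    """Rebuild the (F, V, 3) sequence from the base frame and deltas."""
    disp = np.cumsum(deltas.astype(np.float64) * step, axis=0)
    return np.concatenate([base[None], base[None] + disp], axis=0)
```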

5.
We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.
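A minimal sketch of the floating-texture idea, assuming OpenCV and NumPy, and assuming each camera's texture has already been projected into the novel view with a 2D correction flow estimated per camera (e.g. via optical flow): warp each projected texture by its flow, then blend with per-camera weights. The function float_and_blend and the scalar weights are assumptions; the paper's GPU implementation and weighting differ.

```python
import cv2
import numpy as np

def float_and_blend(textures, flows, weights):
    """Warp ("float") each camera's projected texture by a per-camera 2D
    correction flow, then blend the warped textures with scalar weights.

    textures: list of (H, W, 3) float images, each camera's texture
              already projected into the novel view.
    flows:    list of (H, W, 2) flows re-aligning each projected texture,
              compensating for calibration and geometry error.
    weights:  per-camera scalars summing to 1.
    """
    h, w = textures[0].shape[:2]
    gy, gx = np.mgrid[0:h, 0:w].astype(np.float32)
    out = np.zeros_like(textures[0], dtype=np.float32)
    for tex, flow, wgt in zip(textures, flows, weights):
        map_x = (gx + flow[..., 0]).astype(np.float32)
        map_y = (gy + flow[..., 1]).astype(np.float32)
        warped = cv2.remap(tex.astype(np.float32), map_x, map_y,
                           cv2.INTER_LINEAR)
        out += wgt * warped
    return out
```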

6.
We present the 3D Video Recorder, a system capable of recording, processing, and playing three-dimensional video from multiple points of view. We first record 2D video streams from several synchronized digital video cameras and store pre-processed images to disk. An off-line processing stage converts these images into a time-varying 3D hierarchical point-based data structure and stores this 3D video to disk. We show how we can trade off 3D video quality against processing performance and devise efficient compression and coding schemes for our novel 3D video representation. A typical sequence is encoded at less than 7 Mbps at a frame rate of 8.5 frames per second. The 3D video player decodes and renders 3D videos from hard-disk in real-time, providing interaction features known from common video cassette recorders, like variable-speed forward and reverse, and slow motion. 3D video playback can be enhanced with novel 3D video effects such as freeze-and-rotate and arbitrary scaling. The player builds upon point-based rendering techniques and is thus capable of rendering high-quality images in real-time. Finally, we demonstrate the 3D Video Recorder on multiple real-life video sequences. ACM CCS: I.3.2 Computer Graphics—Graphics Systems, I.3.5 Computer Graphics—Computational Geometry and Object Modelling, I.3.7 Computer Graphics—Three-Dimensional Graphics and Realism

7.
The bidirectional texture function (BTF) is a 6D function that describes the appearance of a real-world surface as a function of lighting and viewing directions. The BTF can model the fine-scale shadows, occlusions, and specularities caused by surface mesostructures. In this paper, we present algorithms for efficient synthesis of BTFs on arbitrary surfaces and for hardware-accelerated rendering. For both synthesis and rendering, a main challenge is handling the large amount of data in a BTF sample. To address this challenge, we approximate the BTF sample by a small number of 4D point appearance functions (PAFs) multiplied by 2D geometry maps. The geometry maps and PAFs lead to efficient synthesis and fast rendering of BTFs on arbitrary surfaces. For synthesis, a surface BTF can be generated by applying a texton-based synthesis algorithm to a small set of 2D geometry maps while leaving the companion 4D PAFs untouched. As for rendering, a surface BTF synthesized using geometry maps is well-suited for leveraging the programmable vertex and pixel shaders on the graphics hardware. We present a real-time BTF rendering algorithm that runs at the speed of about 30 frames/second on a mid-level PC with an ATI Radeon 8500 graphics card. We demonstrate the effectiveness of our synthesis and rendering algorithms using both real and synthetic BTF samples.
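The decomposition described here can be read as BTF(x, v, l) ≈ Σ_k G_k(x) · PAF_k(v, l), with 2D geometry maps G_k and 4D appearance functions PAF_k. The Python sketch below evaluates such an approximation at one texel for discretely sampled view/light directions; it is a single-channel toy (reconstruct_btf is a hypothetical name), not the paper's shader-based renderer.

```python
import numpy as np

def reconstruct_btf(geometry_maps, pafs, texel, view_idx, light_idx):
    """Evaluate BTF(x, v, l) ~ sum_k G_k(x) * PAF_k(v, l) at one texel.

    geometry_maps: (K, H, W) stack of 2D geometry maps G_k.
    pafs:          (K, V, L) point appearance functions sampled over V
                   view and L light directions (single channel for brevity).
    texel:         (u, v) integer texture coordinates of x.
    """
    u, v = texel
    g = geometry_maps[:, v, u]        # K geometry values at the texel
    p = pafs[:, view_idx, light_idx]  # K appearance values for (v, l)
    return float(g @ p)
```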

8.
In this paper we present a pipeline for rendering dynamic 2D/3D line drawings efficiently. Our main goal is to create efficient static renditions and coherent animations of line drawings in a setting where lines can be added, deleted and arbitrarily transformed on-the-fly. Such a dynamic setting enables us to handle interactively sketched 2D line data, as well as arbitrarily transformed 3D line data in a unified manner. We evaluate the proximity of screen projected strokes to simplify them while preserving their continuity. We achieve this by using a special data structure that facilitates efficient proximity calculations in a dynamic setting. This on-the-fly proximity evaluation also facilitates generation of appropriate visibility cues to mitigate depth ambiguities and visual clutter for 3D line data. As we perform all these operations using only line data, we can create line drawings from 3D models without any surface information. We demonstrate the effectiveness and applicability of our approach by showing several examples with initial line representations obtained from a variety of sources: 2D and 3D hand-drawn sketches and 3D salient geometry lines obtained from 3D surface representations.

9.
3D video [IEEE Multimedia (1997) 18] is the ultimate image medium, recording dynamic visual events in the real world as they are: it records time-varying 3D object shape with high-fidelity surface properties (i.e., color and texture). Its applications cover a wide variety of personal and social human activities: entertainment (e.g., 3D games and 3D TV), education (e.g., 3D animal picture books), sports (e.g., sport performance analysis), medicine (e.g., 3D surgery monitoring), culture (e.g., 3D archives of traditional dances), and so on. In this paper, we propose: (1) a PC cluster system for real-time reconstruction of dynamic 3D object action from multi-view video images, (2) a deformable 3D mesh model for reconstructing the accurate dynamic 3D object shape, and (3) an algorithm for rendering natural-looking texture on the 3D object surface from the multi-view video images. Experimental results with quantitative performance evaluations demonstrate the effectiveness of these methods in generating high-fidelity 3D video from multi-view video images.

10.
Real-time streaming of shape deformations in a shared distributed virtual environment is a challenging task due to the difficulty of transmitting large amounts of 3D animation data to multiple receiving parties at a high frame rate. In this paper, we present a framework for streaming 3D shape deformations, which allows shapes with multiple resolutions to share the same deformations simultaneously in real time. The geometry and motion of deforming mesh or point-sampled surfaces are compactly encoded, transmitted, and reconstructed using the spectra of the manifold harmonics. A receiver-based multi-resolution surface reconstruction approach is introduced, which allows deforming shapes to switch smoothly between continuous multi-resolutions. On the basis of this dynamic reconstruction scheme, a frame rate control algorithm is further proposed to achieve rendering at interactive rates. We also demonstrate an efficient interpolation-based strategy to reduce the computation of deformations. The experiments conducted on both mesh and point-sampled surfaces show that our approach achieves efficient performance even when deformations of complex 3D surfaces are streamed. Copyright © 2013 John Wiley & Sons, Ltd.
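As a rough illustration of spectral encoding of deformations, the sketch below builds a harmonics-like basis from the combinatorial graph Laplacian of a mesh and projects per-vertex displacements onto it; only the small (k, 3) spectrum then needs to be streamed, with reconstruction on the receiver side. The combinatorial Laplacian is a simplifying assumption here: true manifold harmonics use a geometry-aware discretisation, and the function names are hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def harmonic_basis(edges, n_verts, k):
    """First k eigenvectors of the combinatorial graph Laplacian of a
    mesh -- a simple stand-in for a manifold harmonics basis."""
    i, j = np.asarray(edges, dtype=int).T
    ones = np.ones(len(i))
    adj = csr_matrix((np.r_[ones, ones], (np.r_[i, j], np.r_[j, i])),
                     shape=(n_verts, n_verts))
    lap = diags(np.asarray(adj.sum(axis=1)).ravel()) - adj
    # shift-invert around ~0 picks out the smoothest (lowest-frequency) modes
    _, basis = eigsh(lap, k=k, sigma=-1e-6, which='LM')
    return basis                                   # (n_verts, k)

def encode_deformation(displacements, basis):
    """Project per-vertex displacements (n_verts, 3) onto k harmonics;
    only this (k, 3) spectrum is transmitted."""
    return basis.T @ displacements

def decode_deformation(spectrum, basis):
    """Receiver-side reconstruction from the streamed spectrum."""
    return basis @ spectrum
```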

11.
This paper presents an online personalised non-photorealistic rendering (NPR) technique for 3D models generated from interactively sketched input. This technique has been integrated into a sketch-based modelling system. It lets users interact with computers by drawing naturally, without specifying the number, order, or direction of strokes. After sketches are interpreted as 3D objects, they can be rendered with personalised drawing styles so that the reconstructed 3D model can be presented in a sketchy style similar in appearance to what has been drawn for the 3D model. This technique captures the user's drawing style without using templates or prior knowledge of the sketching style. The personalised rendering style can be applied to both visible and initially invisible geometry. The rendering strokes are intelligently selected from the input sketches and mapped to edges of the 3D object. In addition, non-geometric information such as surface textures can be added to the recognised object in different sketching modes. This will integrate sketch-based incremental 3D modelling and NPR into conceptual design.

12.
We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semi-urban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
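Aligning the video-derived point cloud to the sensor point cloud is a rigid registration problem; a textbook starting point is point-to-point ICP with an SVD (Kabsch) fit, sketched below in Python. The paper's registration pipeline is more robust than this brute-force version, which is given only to make the cloud-to-cloud idea concrete.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t mapping matched src points onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Point-to-point ICP with brute-force nearest-neighbour matching;
    fine for small clouds, illustrative only for large ones."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]  # closest dst point per src point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```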

13.
Depth and visual hulls are useful for quick reconstruction and rendering of a 3D object based on a number of reference views. However, for many scenes, especially multi-object, these hulls may contain significant artifacts known as phantom geometry. In depth hulls the phantom geometry appears behind the scene objects in regions occluded from all the reference views. In visual hulls the phantom geometry may also appear in front of the objects because there is not enough information to unambiguously imply the object positions. In this work we identify which parts of the depth and visual hull might constitute phantom geometry. We define the notion of reduced depth hull and reduced visual hull as the parts of the corresponding hull that are phantom-free. We analyze the role of the depth information in identification of the phantom geometry. Based on this, we provide an algorithm for rendering the reduced depth hull at interactive frame-rates and suggest an approach for rendering the reduced visual hull. The rendering algorithms take advantage of modern GPU programming techniques. Our techniques bypass explicit reconstruction of the hulls, rendering the reduced depth or visual hull directly from the reference views.

14.
We propose an efficient approach for authoring dynamic and realistic waterfall scenes based on an acquired video sequence. Traditional video based techniques generate new images by synthesizing 2D samples, i.e., texture sprites chosen from a video sequence. However, they are limited to one fixed viewpoint and cannot provide arbitrary walkthrough into 3D scenes. Our approach extends this scheme by synthesizing dynamic 2D texture sprites and projecting them into 3D space. We first generate a set of basis texture sprites, which capture the representative appearance and motions of waterfall scenes contained in the video sequence. To model the shape and motion of a new waterfall scene, we interactively construct a set of flow lines taking account of physical principles. Along each flow line, the basis texture sprites are manipulated and animated dynamically, yielding a sequence of dynamic texture sprites in 3D space. These texture sprites are displayed using the point splatting technique, which can be accelerated efficiently by graphics hardware. By choosing varied basis texture sprites, waterfall scenes with different appearance and shapes can be conveniently simulated. The experimental results demonstrate that our approach achieves realistic effects and real-time frame rates on consumer PC platforms. Copyright © 2006 John Wiley & Sons, Ltd.
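To make the flow-line animation concrete, here is a hedged Python sketch that advects texture sprites along a polyline flow line at constant speed, wrapping them so the stream appears continuous. The name advect_sprites and the arc-length parametrisation are illustrative assumptions; the paper's sprite dynamics and splatting are richer.

```python
import numpy as np

def advect_sprites(flow_line, n_sprites, speed, t):
    """Place n_sprites texture sprites along a polyline flow line at time
    t, each advected downstream at constant speed and wrapped so the
    stream looks continuous.

    flow_line: (P, 3) polyline from the waterfall's source to its basin.
    Returns (n_sprites, 3) sprite positions.
    """
    seg = np.linalg.norm(np.diff(flow_line, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    total = arc[-1]
    phase = np.arange(n_sprites) / n_sprites * total
    s = (phase + speed * t) % total                 # advected arc positions
    pos = np.empty((n_sprites, 3))
    for d in range(3):                              # interpolate each axis
        pos[:, d] = np.interp(s, arc, flow_line[:, d])
    return pos
```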

15.
Our work targets 3D scenes in motion. In this article, we propose a method for view-dependent layered representation of 3D dynamic scenes. Using densely arranged cameras, we've developed a system that can perform processing in real time from image pickup to interactive display, using video sequences instead of static images, at 10 frames per second. In our system, images on layers are view dependent, and we update both the shape and image of each layer in real time. This lets us use the dynamic layers as the coarse structure of the dynamic 3D scenes, which improves the quality of the synthesized images. In this sense, our prototype system may be one of the first full real-time image-based modelling and rendering systems. Our experimental results show that this method is useful for interactive 3D rendering of real scenes.

16.
Currently, 3D animation rendering and video compression are completely independent processes, even if rendered frames are streamed on-the-fly within a client-server platform. In such a scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamic adjustment of the rendering quality to such requirements can lead to a better use of server resources. In this work, we present a framework where the renderer and MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression as well as image content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side enables the use of advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. Those error thresholds are then used to control the rendering quality and make it well aligned with the compressed stream quality. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, and an MPEG-2 implementation. Our results clearly demonstrate many advantages of coupling the rendering with video compression in terms of faster rendering. Furthermore, temporally coherent rendering leads to a reduction of temporal artifacts.

17.
We present an algorithm for acquiring the 3D surface geometry and motion of a dynamic piecewise-rigid object using a single depth video camera. The algorithm identifies and tracks the rigid components in each frame, while accumulating the geometric information acquired over time, possibly from different viewpoints. The algorithm also reconstructs the dynamic skeleton of the object, thus can be used for markerless motion capture. The acquired model can then be animated to novel poses. We show the results of the algorithm applied to synthetic and real depth video.
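Identifying rigid components can be viewed as clustering point motions by the rigid transform that best explains them. The sketch below shows one assignment step of such a scheme (assign_rigid_components is a hypothetical name); a full tracker would alternate this with refitting the K transforms, k-means style. The paper's actual tracking pipeline may differ in detail.

```python
import numpy as np

def assign_rigid_components(src, dst, transforms):
    """Label each matched point pair with the candidate rigid motion that
    best explains it -- one assignment step of a piecewise-rigid
    segmentation.

    src, dst:   (N, 3) matched points in consecutive depth frames.
    transforms: list of K (R, t) candidate rigid motions.
    Returns (labels, residuals), both of length N.
    """
    res = np.stack([np.linalg.norm(src @ R.T + t - dst, axis=1)
                    for R, t in transforms])        # (K, N) residuals
    labels = res.argmin(axis=0)                     # best component per point
    return labels, res[labels, np.arange(len(src))]
```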

18.
In recent years, the convergence of computer vision and computer graphics has put forth a new field of research that focuses on the reconstruction of real-world scenes from video streams. To make immersive 3D video reality, the whole pipeline spanning from scene acquisition over 3D video reconstruction to real-time rendering needs to be researched. In this paper, we describe the latest advancements of our system to record, reconstruct and render free-viewpoint videos of human actors. We apply a silhouette-based non-intrusive motion capture algorithm making use of a 3D human body model to estimate the actor's parameters of motion from multi-view video streams. A renderer plays back the acquired motion sequence in real-time from any arbitrary perspective. Photo-realistic physical appearance of the moving actor is obtained by generating time-varying multi-view textures from video. This work shows how the motion capture sub-system can be enhanced by incorporating texture information from the input video streams into the tracking process. 3D motion fields are reconstructed from optical flow and used in combination with silhouette matching to estimate pose parameters. We demonstrate that a high visual quality can be achieved with the proposed approach and validate the enhancements contributed by the motion field step.

19.
This paper presents a novel progressive modelling algorithm for 3D models to generate progressive meshes. We propose a forest clustering simplification method to generate a progressive mesh of a model with efficient and smooth transitions between meshes at different resolutions. Our approach can also integrate and balance the appearance attributes to preserve features of a model in the simplification process. We have applied our progressive modelling technique to several different kinds of input models, and the results show that our approach not only generates efficient and smooth progressive meshes of a given model, but also preserves its features. The proposed method is very suitable for progressive transmission and real-time rendering of 3D models in networked virtual environments. Copyright © 2002 John Wiley & Sons, Ltd.
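For contrast with the paper's forest-clustering approach, here is the generic mechanism progressive meshes rest on: repeatedly collapse an edge and log the operation so any intermediate resolution can be replayed. This Python sketch uses a naive shortest-edge criterion on a wireframe (no faces or appearance attributes), so it illustrates only the collapse log, not the paper's feature-preserving simplification.

```python
import numpy as np

def shortest_edge_collapses(verts, edges, n_collapses):
    """Greedy progressive-mesh construction on a wireframe: repeatedly
    collapse the shortest remaining edge to its midpoint, logging each
    collapse so any intermediate resolution can be replayed.

    verts: (V, 3) vertex positions; edges: iterable of (i, j) pairs.
    Returns the simplified positions and the collapse log.
    """
    verts = np.asarray(verts, dtype=float).copy()
    edges = {tuple(sorted(e)) for e in edges}
    log = []
    for _ in range(n_collapses):
        if not edges:
            break
        i, j = min(edges,
                   key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
        log.append((i, j, verts[j].copy()))   # enough to undo the collapse
        verts[i] = 0.5 * (verts[i] + verts[j])
        edges = {tuple(sorted((i if a == j else a, i if b == j else b)))
                 for a, b in edges}           # reroute edges of j to i
        edges.discard((i, i))                 # drop the collapsed edge
    return verts, log
```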

20.
This paper addresses the synthesis of novel views of people from multiple view video. We consider the target area of the multiple camera 3D Virtual Studio for broadcast production with the requirement for free-viewpoint video synthesis for a virtual camera with the same quality as captured video. A framework is introduced for view-dependent optimisation of reconstructed surface shape to align multiple captured images with sub-pixel accuracy for rendering novel views. View-dependent shape optimisation combines multiple view stereo and silhouette constraints to robustly estimate correspondence between images in the presence of visual ambiguities such as uniform surface regions, self-occlusion, and camera calibration error. Free-viewpoint rendering of video sequences of people achieves a visual quality comparable to the captured video images. Experimental evaluation demonstrates that this approach overcomes limitations of previous stereo- and silhouette-based approaches to rendering novel views of moving people.
