Similar Articles
 20 similar articles found.
1.
Conventional 2D or 3D texturing methods do not permit an efficient simulation of highly complex structured textures such as fire, fur, or cotton. More recent techniques using specific kinds of 3D textures, such as hypertextures or texels based on volume rendering algorithms, are more interesting for simulating such special types of textures. Unfortunately, these techniques remain restricted: either they require a functional model of the object, as is the case with hypertextures, or they are strictly limited to one specific kind of texture, as is the case with texels. In this paper we present a new approach for applying a wide range of very different types of highly complex structured textures (fur, fire, water drops, cotton, fume, …) to every kind of object. The method is based on the geometric information given by a geometric model (such as polyhedral or CSG modelling). Like hypertextures and texels, our method uses volume density rendering, but it is free of the serious restrictions mentioned above. In addition, it allows an easy and very intuitive control of the global geometric shape of the generated textures. Its manipulation is simple even for a novice user.
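The volume density rendering this entry relies on accumulates opacity along each viewing ray with an emission-absorption model. A minimal, hypothetical sketch of that accumulation (the `march_ray` helper and the fur-like density falloff are illustrative stand-ins, not the paper's method):

```python
import math

def march_ray(density, t0, t1, steps=64, absorption=4.0):
    """Accumulate opacity along a ray through a 1D density profile
    using the standard emission-absorption (Beer-Lambert) model."""
    dt = (t1 - t0) / steps
    transmittance = 1.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt
        sigma = absorption * density(t)          # extinction at this sample
        transmittance *= math.exp(-sigma * dt)   # attenuate along the ray
    return 1.0 - transmittance                   # accumulated opacity

# A "fur-like" density that falls off with distance from the surface (t = 0).
fur = lambda t: max(0.0, 1.0 - t)
opacity = march_ray(fur, 0.0, 1.0)
```

Increasing the `absorption` coefficient makes the same density profile appear denser, which is the kind of intuitive shape/appearance control the abstract refers to.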

2.
This paper presents an online personalised non-photorealistic rendering (NPR) technique for 3D models generated from interactively sketched input. This technique has been integrated into a sketch-based modelling system. It lets users interact with computers by drawing naturally, without specifying the number, order, or direction of strokes. After sketches are interpreted as 3D objects, they can be rendered with personalised drawing styles so that the reconstructed 3D model can be presented in a sketchy style similar in appearance to what has been drawn for the 3D model. This technique captures the user's drawing style without using templates or prior knowledge of the sketching style. The personalised rendering style can be applied to both visible and initially invisible geometry. The rendering strokes are intelligently selected from the input sketches and mapped to edges of the 3D object. In addition, non-geometric information such as surface textures can be added to the recognised object in different sketching modes. This integrates sketch-based incremental 3D modelling and NPR into conceptual design.

3.
Light fields were introduced a decade ago as a new high-dimensional graphics rendering model. However, they have not been widely used, because their applications are very specific and their storage requirements are too high. Recently, spatial imaging devices have been related to light fields. These devices allow several users to see three-dimensional (3D) images without glasses or other intrusive elements. This paper presents a light-field model that can be rendered on an autostereoscopic spatial device. The model is viewpoint-independent and supports continuous multiresolution, foveal rendering, and the integration of multiple light fields and geometric models in the same scene. We also show that it is possible to interactively examine a scene composed of several light fields and geometric models. Visibility is handled by the algorithm. Our goal is to apply our models to 3D TV and spatial imaging.
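A light field is conventionally discretised as a 4D radiance table over a two-plane parameterisation, which is also what makes its storage cost so high. A hedged toy lookup to make the dimensionality concrete (the synthetic data and `sample_lightfield` helper are assumptions, not the paper's model):

```python
import random

# A tiny synthetic two-plane light field: radiance indexed by camera-plane
# coordinates (u, v) and image-plane coordinates (s, t).
U = V = S = T = 8
random.seed(0)
lightfield = [[[[random.random() for _ in range(T)] for _ in range(S)]
               for _ in range(V)] for _ in range(U)]

def sample_lightfield(lf, u, v, s, t):
    """Nearest-neighbour radiance lookup; all coordinates lie in [0, 1]."""
    def idx(x, n):
        return min(int(round(x * (n - 1))), n - 1)
    return lf[idx(u, len(lf))][idx(v, len(lf[0]))] \
             [idx(s, len(lf[0][0]))][idx(t, len(lf[0][0][0]))]

radiance = sample_lightfield(lightfield, 0.5, 0.5, 0.25, 0.75)
```

Even this 8^4 toy table holds 4096 samples; realistic resolutions explain the storage pressure the abstract mentions and the appeal of multiresolution schemes.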

4.
This paper presents a novel progressive modelling algorithm for 3D models to generate progressive meshes. We propose a forest clustering simplification method to generate a progressive mesh of a model with efficient and smooth transitions between meshes at different resolutions. Our approach can also integrate and balance the appearance attributes to preserve features of a model in the simplification process. We have applied our progressive modelling technique to several different kinds of input models, and the results show that our approach not only generates efficient and smooth progressive meshes of a given model, but also preserves its features. The proposed method is very suitable for progressive transmission and real-time rendering of 3D models in networked virtual environments. Copyright © 2002 John Wiley & Sons, Ltd.
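The general shape of clustering-based simplification can be sketched with plain grid clustering: vertices falling in the same spatial cell are merged to their average, and faces that collapse are dropped. This is a generic stand-in, not the paper's forest clustering; `cluster_simplify` and its parameters are hypothetical:

```python
from collections import defaultdict

def cluster_simplify(vertices, faces, cell=1.0):
    """Grid-based vertex clustering: snap vertices sharing a grid cell to
    their average position and drop faces that become degenerate."""
    cell_of = [tuple(int(c // cell) for c in v) for v in vertices]
    groups = defaultdict(list)
    for i, key in enumerate(cell_of):
        groups[key].append(i)

    new_index, new_vertices = {}, []
    for members in groups.values():
        avg = tuple(sum(vertices[i][d] for i in members) / len(members)
                    for d in range(3))
        for i in members:
            new_index[i] = len(new_vertices)
        new_vertices.append(avg)

    new_faces = []
    for a, b, c in faces:
        fa, fb, fc = new_index[a], new_index[b], new_index[c]
        if len({fa, fb, fc}) == 3:   # face collapsed to an edge/point: drop
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2), (0, 2, 3)]
coarse_v, coarse_f = cluster_simplify(verts, tris, cell=10.0)
```

Varying `cell` yields the sequence of resolutions that a progressive mesh records as a base mesh plus refinement operations.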

5.
6.
Many graphical objects can be represented by swept volumes (including their subset, generalised cylinders) obtained by sweeping 2D or 3D templates along 3D trajectories. In this paper, we present a new approach for constructing swept volumes using image templates. We utilise scalar fields as our underlying data type, and employ volume ray casting techniques for rendering swept volumes both in their original sweeping specifications and in their voxelised approximations. In addition to some simple image-swept volumes, we also treat multi-channel image templates, video templates, generalised sweeps, and self-intersecting trajectories. This approach enables us to model swept volumes with heterogeneous interiors and amorphous effects. It also facilitates the use of constructive volume geometry for creating complex scenes in both modelling and rendering space.
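Voxelising a swept volume amounts to testing each grid point against the template swept along the trajectory. A toy version using a sphere template (i.e. a capsule around a sampled path) rather than the paper's image templates; all names here are illustrative:

```python
import math

def sweep_volume(trajectory, radius, n=16):
    """Voxelise a swept volume on an n^3 grid over [0,1]^3: a voxel is
    filled if its centre lies within `radius` of the sampled trajectory."""
    grid = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                p = ((x + 0.5) / n, (y + 0.5) / n, (z + 0.5) / n)
                d = min(math.dist(p, q) for q in trajectory)
                if d <= radius:
                    grid[x][y][z] = 1.0
    return grid

# Sweep along a straight segment through the cube's centre.
path = [(t / 31, 0.5, 0.5) for t in range(32)]
vol = sweep_volume(path, radius=0.2)
filled = sum(v for plane in vol for row in plane for v in row)
```

An image template generalises the constant `radius` test to a per-template scalar field, which is what enables the heterogeneous interiors the abstract describes.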

7.
BRDF Measurement Modelling using Wavelets for Efficient Path Tracing
Physically based rendering needs numerical models from real measurements, or analytical models from material definitions, of the Bidirectional Reflectance Distribution Function (BRDF). However, measured BRDF data sets are too large and lack the functionality needed for practical use in Monte Carlo path tracing algorithms. In this paper, we present a wavelet-based generic BRDF model suitable for both physical analysis and path tracing. The model is based on the separation of the spectral and geometrical aspects of the BRDF and allows a compact and efficient representation of isotropic, anisotropic and/or spectral BRDFs. After a brief survey of BRDF and wavelet theory, we present our software architecture for generic wavelet transforms and how to use it to model BRDFs. Then, modelling results are presented on real and virtual BRDF measurements. Finally, we show how to exploit the multiresolution property of the wavelet encoding to reduce variance by importance sampling in a path tracing algorithm. ACM CSS: I.3.7 Computer Graphics—Three-Dimensional Graphics and Realism
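The compactness argument can be illustrated with a 1D Haar wavelet transform of a sampled reflectance slice, followed by thresholding: smooth BRDF lobes produce many near-zero detail coefficients. This is a generic sketch of wavelet compression, not the paper's architecture; the Gaussian "glossy lobe" is a made-up stand-in for measured data:

```python
import math

def haar(signal):
    """Full 1D orthonormal Haar transform; length must be a power of two."""
    data = list(signal)
    n = len(data)
    while n > 1:
        half = n // 2
        avgs = [(data[2*i] + data[2*i+1]) / math.sqrt(2) for i in range(half)]
        difs = [(data[2*i] - data[2*i+1]) / math.sqrt(2) for i in range(half)]
        data[:n] = avgs + difs
        n = half
    return data

def ihaar(coeffs):
    """Inverse of haar(): rebuild the signal level by level."""
    data = list(coeffs)
    n = 1
    while n < len(data):
        merged = []
        for i in range(n):
            merged.append((data[i] + data[n + i]) / math.sqrt(2))
            merged.append((data[i] - data[n + i]) / math.sqrt(2))
        data[:2 * n] = merged
        n *= 2
    return data

def compress(coeffs, threshold):
    """Zero out small coefficients -- the sparsity that makes measured
    BRDF tables compact enough for practical sampling."""
    return [c if abs(c) >= threshold else 0.0 for c in coeffs]

# A smooth 1D slice of a hypothetical measured BRDF: a glossy lobe.
samples = [math.exp(-((i - 8) / 3.0) ** 2) for i in range(16)]
coeffs = haar(samples)
sparse = compress(coeffs, 0.05)
```

The surviving large coefficients also drive importance sampling: regions of the domain with large wavelet energy receive proportionally more path-tracing samples.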

8.
We present a Hybrid Geometric-Image Based Rendering (HGIBR) system for displaying very complex geometric models at interactive frame rates. Our approach replaces distant geometry with a combination of image-based representations and geometry, while rendering nearby objects from geometry. Reference images are computed on demand, which means that no pre-processing or additional storage is necessary. We present results for a massive model of a whole offshore gas platform to demonstrate that interactive frame rates can be maintained using the HGIBR approach. Our implementation runs on a pair of PCs, using commodity graphics hardware for fast 3D warping.

9.
Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution for exploring this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye-tracking experiments involving 3D shapes, with both static and time-varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes with the aim of producing a benchmark of 3D meshes with fixation density maps, which is publicly available. First, the collected data is used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state-of-the-art mesh saliency models in predicting ground-truth fixations using two different metrics. We show that, even combined with a center-bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally provide a comparison of human-eye fixations and Schelling points and show that their correlation is weak.
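Turning discrete fixation points into a per-vertex fixation density map is commonly done by Gaussian splatting onto the mesh. A hedged toy version of that mapping step (`fixation_density`, its kernel width, and the tiny triangle are illustrative assumptions, not the paper's protocol):

```python
import math

def fixation_density(vertices, fixations, sigma=0.1):
    """Splat 3D fixation points onto mesh vertices with a Gaussian kernel,
    yielding a normalised per-vertex fixation density map."""
    density = [0.0] * len(vertices)
    for f in fixations:
        for i, v in enumerate(vertices):
            d2 = sum((a - b) ** 2 for a, b in zip(v, f))
            density[i] += math.exp(-d2 / (2 * sigma ** 2))
    total = sum(density) or 1.0
    return [d / total for d in density]   # normalise to a distribution

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
dmap = fixation_density(verts, fixations=[(0.05, 0.0, 0.0)])
```

Such density maps are exactly what the saliency models compared in the abstract are scored against.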

10.
A perspective image represents the spatial relationships of objects in a scene as they appear from a single viewpoint. In contrast, a multi-perspective image combines what is seen from several viewpoints into a single image. Despite their incongruity of view, effective multi-perspective images are able to preserve spatial coherence and can depict, within a single context, details of a scene that are simultaneously inaccessible from a single view, yet easily interpretable by a viewer. In computer vision, multi-perspective images have been used for analysing structure revealed via motion and for generating panoramic images with a wide field-of-view using mirrors. In this STAR, we provide a practical guide to multi-perspective modelling and rendering methods and multi-perspective imaging systems. We start with a brief review of multi-perspective image techniques frequently employed by artists, such as the visual paradoxes of Escher, the Cubism of Picasso and Braque, and the multi-perspective panoramas of cel animation. We then characterize existing multi-perspective camera models, with an emphasis on their underlying geometry and image properties, and demonstrate how to use these camera models for creating specific multi-perspective rendering effects. Furthermore, we show that many of these cameras satisfy the multi-perspective stereo constraints, and we demonstrate several multi-perspective imaging systems for extracting 3D geometry for computer vision. We hope to provide enough fundamentals to satisfy the technical specialist without intimidating curious digital artists interested in multi-perspective images.
The intended audience includes digital artists, photographers, and computer graphics and computer vision researchers using or building multi-perspective cameras. They will learn about multi-perspective modelling and rendering, along with many real-world multi-perspective imaging systems.

11.
12.
Superior human pose and shape reconstruction from monocular images depends on removing the ambiguities caused by occlusion and shape variance. Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth. However, 3D ground truth is neither abundant nor efficient to obtain. In this paper, we introduce body part segmentation as critical supervision. Part segmentation not only indicates the shape of each body part but also helps to infer the occlusions among parts. To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation in neural networks or optimization loops. We also introduce a general parametric model, engaged in the rendering pipeline as an intermediate representation between skeletons and detailed shapes, which consists of primitive geometries for better interpretability. The proposed approach combines parameter regression, body model optimization, and detailed model registration into a single framework. Experimental results demonstrate that the proposed method achieves balanced evaluation on pose and shape, and outperforms the state-of-the-art approaches on the Human3.6M, UP-3D and LSP datasets.

13.
A Few Good Lines: Suggestive Drawing of 3D Models
We present a method for rendering 3D models in the traditional line-drawing style used in artistic and scientific illustrations. The goal is to suggest the 3D shape of the objects using a small number of lines drawn with carefully chosen line qualities. The system combines several known techniques into a simple yet effective non-photorealistic line renderer. Feature edges related to the outline and interior of a given 3D mesh are extracted, segmented, and smoothed, yielding chains of lines with varying path, length, thickness, gaps, and enclosures. The paper includes sample renderings obtained for a variety of models.
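Feature-edge extraction of this kind typically flags boundary edges plus creases whose dihedral angle exceeds a threshold. A minimal sketch under that assumption (not the paper's exact pipeline; `feature_edges` and its threshold are illustrative):

```python
import math
from collections import defaultdict

def feature_edges(vertices, faces, angle_deg=40.0):
    """Return boundary edges plus crease edges whose adjacent face
    normals diverge by more than `angle_deg` degrees."""
    def normal(f):
        a, b, c = (vertices[i] for i in f)
        u = [b[k] - a[k] for k in range(3)]
        v = [c[k] - a[k] for k in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        return [x / length for x in n]

    edge_faces = defaultdict(list)
    for f in faces:
        for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_faces[tuple(sorted(e))].append(f)

    features, cos_limit = [], math.cos(math.radians(angle_deg))
    for edge, fs in edge_faces.items():
        if len(fs) == 1:                 # boundary edge: always a feature
            features.append(edge)
        elif len(fs) == 2:               # interior edge: test the crease angle
            n1, n2 = normal(fs[0]), normal(fs[1])
            if sum(a * b for a, b in zip(n1, n2)) < cos_limit:
                features.append(edge)
    return features

quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
flat = feature_edges(quad, [(0, 1, 2), (0, 2, 3)])   # coplanar: no crease
```

Chaining, smoothing, and styling these edges would then produce the varying stroke qualities the abstract describes.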

14.
While existing spherical parameterization algorithms are limited to genus-0 geometric models, we believe a wide class of models of arbitrary genus can also benefit from the spherical domain. We present a complete and robust pipeline that can generate spherical geometry images from surfaces of arbitrary genus, where the holes are explicitly represented. The geometric model, represented as a triangle mesh, is first made topologically equivalent to a sphere by cutting each hole along its generators, thus performing genus reduction. The resulting genus-0 model is then parameterized on the sphere, where it is resampled in a way that preserves connectivity between holes and reduces the visual impact of seams due to these holes. Knowing the location of each pair of boundary components in parametric space, our novel sampling scheme can automatically choose to scale down or completely eliminate the associated hole, depending on geometry image resolution, thus lowering the genus of the reconstructed model. We found our method to scale better than other geometry image algorithms for higher-genus models. We illustrate our approach on remeshing, level-of-detail rendering, normal mapping and topology editing.

15.
Designing 3D objects from scratch is difficult, especially when the user intent is fuzzy and lacks a clear target form. We facilitate design by providing reference and inspiration from existing model contexts. We rethink model design as navigating through different possible combinations of part assemblies based on a large collection of pre-segmented 3D models. We propose an interactive sketch-to-design system, where the user sketches prominent features of parts to combine. The sketched strokes are analysed individually and, more importantly, in context with the other parts to generate relevant shape suggestions via a design gallery interface. As a modelling session progresses and more parts get selected, contextual cues become increasingly dominant, and the model quickly converges to a final form. As a key enabler, we use pre-learned part-based contextual information to allow the user to quickly explore different combinations of parts. Our experiments demonstrate the effectiveness of our approach for efficiently designing new variations from existing shape collections.

16.
In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which have previously been shown to aid observers in the depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any pre-computation, thus allowing interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions.
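The circle-of-confusion-driven blur behind such depth-of-field effects can be sketched on a single scanline: the blur radius grows with distance from the focal plane. This toy uses a thin-lens approximation and a per-pixel box filter as a stand-in for the paper's incremental filtering; all names are hypothetical:

```python
def circle_of_confusion(depth, focal_depth, aperture=2.0):
    """Blur radius (in pixels) from the distance to the focal plane,
    in a simplified thin-lens approximation."""
    return aperture * abs(depth - focal_depth) / max(depth, 1e-6)

def depth_of_field(image, depths, focal_depth, aperture=2.0):
    """Blur a 1D scanline with a per-pixel box filter whose width follows
    each pixel's circle of confusion."""
    out = []
    for i, _ in enumerate(image):
        r = int(round(circle_of_confusion(depths[i], focal_depth, aperture)))
        lo, hi = max(0, i - r), min(len(image), i + r + 1)
        out.append(sum(image[lo:hi]) / (hi - lo))
    return out

scanline = [0.0] * 4 + [1.0] + [0.0] * 4        # one bright sample
in_focus = depth_of_field(scanline, [1.0] * 9, focal_depth=1.0)
defocused = depth_of_field(scanline, [3.0] * 9, focal_depth=1.0)
```

In a direct volume renderer, the depth used here would come per sample along each ray, which is why an incremental (rather than post-process) formulation is attractive.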

17.
Hypertexturing can be a powerful way of adding rich geometric details to surfaces at low memory cost by using a procedural three-dimensional (3D) space distortion. However, this special kind of texturing technique still raises a major problem: efficient control of the visual result. In this paper, we introduce a framework for interactive hypertexture modelling. This framework is based on two contributions. First, we propose a reformulation of the density modulation function. Our density modulation is based on the notion of a shape transfer function. This function, which can be easily edited by users, allows us to control the visual appearance of the geometric details resulting from the space distortion in an intuitive way. Second, we propose to use a hybrid surface and volume-point-based representation in order to dynamically hypertexture arbitrary objects at interactive frame rates. The rendering consists of a combined splat- and raycasting-based direct volume rendering technique. The splats are used to model the volumetric object while raycasting allows us to add the details. A user study shows that our approach improves the design of hypertextures while preserving their procedural nature.
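The core idea of density modulation via a shape transfer function can be sketched as: take a base density from the distance to the surface, distort it with noise, then reshape it through a user-editable curve. The pseudo-noise and the `sharpen` transfer function below are stand-ins, not the paper's formulation:

```python
import math

def turbulence(p, octaves=4):
    """Cheap deterministic pseudo-noise in [0, 1], standing in for
    Perlin-style turbulence."""
    x, y, z = p
    total = 0.0
    for o in range(octaves):
        f = 2 ** o
        total += abs(math.sin(12.9898*x*f + 78.233*y*f + 37.719*z*f)) / f
    return total / sum(1 / 2 ** o for o in range(octaves))

def hypertexture_density(p, surface_dist, shape_transfer, gain=0.5):
    """Density modulation: base density from the distance to the surface,
    distorted by noise, then reshaped by the shape transfer function."""
    d = max(0.0, 1.0 - surface_dist) + gain * (turbulence(p) - 0.5)
    return shape_transfer(min(max(d, 0.0), 1.0))

# A transfer function that sharpens the shell into fur-like wisps.
sharpen = lambda t: t ** 3
rho = hypertexture_density((0.3, 0.4, 0.5), surface_dist=0.2,
                           shape_transfer=sharpen)
```

Editing only `shape_transfer` changes the look of the details without touching the noise, which is the kind of intuitive control the framework aims for.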

18.
Data-driven methods serve an increasingly important role in discovering geometric, structural and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation from each other, data-driven methods aggregate information from 3D model collections to improve the analysis, modelling and editing of shapes. Data-driven methods are also able to learn computational models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. Through reviewing the literature, we provide an overview of the main concepts and components of these methods, and discuss their application to classification, segmentation, matching, reconstruction, modelling and exploration, as well as to scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

19.
Efficient physically accurate modeling and rendering of woven cloth at the yarn level is an inherently complicated task due to the underlying geometrical and optical complexity. In this paper, a novel and general approach to physically accurate cloth rendering is presented. By using a statistical volumetric model approximating the distribution of yarn fibers, a prohibitively costly explicit geometrical representation is avoided. As a result, accurate rendering of even large pieces of fabric containing orders of magnitude more fibers becomes practical, without sacrificing much generality compared to fiber-based techniques. By employing the concept of local visibility and introducing the effective fiber density, limitations of existing volumetric approaches regarding self-shadowing and fiber density estimation are greatly reduced.

20.
Mesh geometry can be used to model both object shape and details. If texture maps are involved, it is common to let the mesh geometry mainly model object shape and let the texture maps model most of the object details, optimising the data size and complexity of an object. To support efficient object rendering and transmission, model simplification can be applied to reduce the modelling data. However, existing methods do not adequately consider how object features are jointly represented by mesh geometry and texture maps, and thus have problems identifying and preserving important features in simplified objects. To address this, we propose a visual saliency detection method for simplifying textured 3D models. We produce good simplification results by jointly processing the mesh geometry and texture map to produce a unified saliency map for identifying visually important object features. Results show that our method offers better object rendering quality than existing methods.
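A unified saliency map joining geometry and texture can be sketched as a normalised blend of per-vertex curvature and texture-gradient magnitude. This weighting scheme and the `unified_saliency` helper are illustrative assumptions, not the paper's detection method:

```python
def unified_saliency(curvature, tex_gradient, w=0.5):
    """Per-vertex saliency as a weighted blend of normalised mesh
    curvature and texture-map gradient magnitude."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1.0
        return [(x - lo) / span for x in xs]
    c, g = norm(curvature), norm(tex_gradient)
    return [w * ci + (1 - w) * gi for ci, gi in zip(c, g)]

# Vertex 0: flat geometry but busy texture; vertex 1: the opposite.
saliency = unified_saliency(curvature=[0.1, 0.9, 0.3],
                            tex_gradient=[0.8, 0.1, 0.2])
```

A simplifier would then protect high-saliency vertices from collapse, so that features carried by the texture alone are not lost.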
