Similar Documents
20 similar documents found
1.
We propose an efficient approach for authoring dynamic and realistic waterfall scenes based on an acquired video sequence. Traditional video-based techniques generate new images by synthesizing 2D samples, i.e., texture sprites chosen from a video sequence. However, they are limited to one fixed viewpoint and cannot provide arbitrary walkthroughs of 3D scenes. Our approach extends this scheme by synthesizing dynamic 2D texture sprites and projecting them into 3D space. We first generate a set of basis texture sprites, which capture the representative appearance and motions of waterfall scenes contained in the video sequence. To model the shape and motion of a new waterfall scene, we interactively construct a set of flow lines taking account of physical principles. Along each flow line, the basis texture sprites are manipulated and animated dynamically, yielding a sequence of dynamic texture sprites in 3D space. These texture sprites are displayed using the point splatting technique, which can be accelerated efficiently by graphics hardware. By choosing varied basis texture sprites, waterfall scenes with different appearances and shapes can be conveniently simulated. The experimental results demonstrate that our approach achieves realistic effects and real-time frame rates on consumer PC platforms. Copyright © 2006 John Wiley & Sons, Ltd.
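A minimal sketch, not the authors' implementation, of one piece of this pipeline: sliding texture sprites along a user-constructed flow line. The polyline resampling, constant sprite speed, and all function names below are assumptions for illustration; the actual system additionally animates the sprite textures themselves and renders them with hardware point splatting.

```python
import numpy as np

def resample_flow_line(points, n_samples):
    """Resample a 3D polyline (flow line) at n_samples equally spaced arc lengths."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    t = np.linspace(0.0, s[-1], n_samples)
    return np.stack([np.interp(t, s, points[:, k]) for k in range(3)], axis=1)

def animate_sprites(flow_line, n_sprites, speed, frame, frame_dt=1 / 30.0):
    """Return 3D positions of texture sprites sliding along the flow line at a given frame.

    Each sprite's arc-length parameter advances by `speed` (units/second) and wraps
    around, producing a continuously flowing appearance.
    """
    pts = resample_flow_line(flow_line, 256)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    total = seg.sum()
    # Initial phases spread the sprites evenly along the line.
    phase = (np.arange(n_sprites) / n_sprites) * total
    s = (phase + speed * frame * frame_dt) % total
    u = s / total * (len(pts) - 1)
    i0 = np.floor(u).astype(int)
    frac = (u - i0)[:, None]
    return pts[i0] * (1.0 - frac) + pts[np.minimum(i0 + 1, len(pts) - 1)] * frac

# Example: a roughly vertical waterfall flow line, 40 sprites falling at 2 units/s.
line = [(0, 10, 0), (0.3, 7, 0.2), (0.5, 4, 0.4), (0.6, 0, 0.5)]
positions = animate_sprites(line, n_sprites=40, speed=2.0, frame=12)
```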

2.
Aura 3D textures     
This paper presents a new technique, called aura 3D textures, for generating solid textures based on input examples. Our method is fully automatic and requires no user interaction. Given an input texture sample, our method first creates its aura matrix representations and then generates a solid texture by sampling the aura matrices of the input sample constrained in multiple view directions. Once the solid texture is generated, any given object can be textured by the solid texture. We evaluate the results of our method based on extensive user studies. Based on the evaluation results using human subjects, we conclude that our algorithm can generate faithful results of both stochastic and structural textures with an average success rate of 76.4 percent. Our experimental results also show that the new method outperforms Wei and Levoy's method and is comparable to that proposed by Jagnow et al. (2004)
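As a rough illustration of the aura matrix representation the method samples from, the sketch below computes a basic gray-level aura matrix (BGLAM) of a 2D texture over a 4-neighbourhood; the quantisation level count and row normalisation are assumptions, and the paper's constrained multi-view sampling step is not shown.

```python
import numpy as np

def basic_gray_level_aura_matrix(img, levels=16):
    """Compute a basic gray-level aura matrix (BGLAM) for a 4-neighbourhood.

    A[i, j] counts, over all pixels quantised to gray level i, how many of their
    neighbours are quantised to gray level j.  Rows are normalised so the matrix
    can be compared between textures as a distribution.
    """
    img = np.asarray(img, dtype=float)
    q = np.clip((img / img.max() * levels).astype(int), 0, levels - 1)
    A = np.zeros((levels, levels), dtype=np.int64)
    h, w = q.shape
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # 4-neighbourhood offsets
        src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        nbr = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
        np.add.at(A, (src.ravel(), nbr.ravel()), 1)
    A = A.astype(float)
    return A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

# Example: aura matrix of a random 64x64 texture sample quantised to 8 levels.
sample = np.random.rand(64, 64)
M = basic_gray_level_aura_matrix(sample, levels=8)
```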

3.
The synthesis quality is one of the most important aspects of solid texture synthesis algorithms. In recent years, several methods have been proposed to generate high quality solid textures. However, these existing methods often suffer from synthesis artifacts such as blurring, missing texture structures, and aberrant voxel colors. In this paper, we introduce a novel algorithm for synthesizing high quality solid textures from 2D exemplars. We first analyze the relevant factors for further improvements of the synthesis quality, and then adopt an optimization framework with the k-coherence search and the discrete solver for solid texture synthesis. The texture optimization approach is integrated with two new kinds of histogram matching methods, position and index histogram matching, which effectively cause the global statistics of the synthesized solid textures to match those of the exemplars. Experimental results show that our algorithm outperforms, or is at least comparable to, previous solid texture synthesis algorithms in terms of synthesis quality.
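The position and index histograms used here are specialised, but they build on ordinary histogram matching. The sketch below shows the generic rank-based version applied to one channel of a synthesized volume, purely as an illustration of matching global statistics to an exemplar; the function name and rank-quantile mapping are assumptions, not the paper's exact procedure.

```python
import numpy as np

def match_histogram(values, reference):
    """Remap `values` so that their empirical histogram matches `reference`."""
    values = np.asarray(values, dtype=float)
    reference = np.asarray(reference, dtype=float).ravel()
    flat = values.ravel()
    order = np.argsort(flat)                      # ranks of the synthesized values
    ref_sorted = np.sort(reference)
    # Map the k-th smallest synthesized value to the value at the same quantile
    # of the reference (exemplar) distribution.
    idx = np.linspace(0, len(ref_sorted) - 1, len(flat)).astype(int)
    out = np.empty_like(flat)
    out[order] = ref_sorted[idx]
    return out.reshape(values.shape)

# Example: force a synthesized volume channel to share the exemplar's color statistics.
exemplar = np.random.beta(2.0, 5.0, size=(128, 128))
volume = np.random.rand(32, 32, 32)
matched = match_histogram(volume, exemplar)
```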

4.
We present an algorithm based on statistical learning for synthesizing static and time-varying textures matching the appearance of an input texture. Our algorithm is general and automatic, and it works well on various types of textures, including 1D sound textures, 2D texture images, and 3D texture movies. The same method is also used to generate 2D texture mixtures that simultaneously capture the appearance of a number of different input textures. In our approach, input textures are treated as sample signals generated by a stochastic process. We first construct a tree representing a hierarchical multiscale transform of the signal using wavelets. From this tree, new random trees are generated by learning and sampling the conditional probabilities of the paths in the original tree. Transformation of these random trees back into signals results in new random textures. In the case of 2D texture synthesis, our algorithm produces results that are generally as good as or better than those produced by previously described methods in this field. For texture mixtures, our results are better and more general than those produced by earlier methods. For texture movies, we present the first algorithm that is able to automatically generate movie clips of dynamic phenomena such as waterfalls, fire flames, a school of jellyfish, a crowd of people, etc. Our results indicate that the proposed technique is effective and robust

5.
We present a method to synthesize solid textures from heterogeneous translucent materials that have a complex pattern and subsurface scattering effect. A solid texture provides consistent texture throughout the volume, so it can be used to model the texture on an arbitrary geometry. However, solid texture synthesis requires a huge amount of time to generate the volume. Moreover, a synthesized solid texture acquires only the color information from an input exemplar. Therefore, it has been difficult to render the appearance of a translucent object realistically without additional appearance data. In this paper, we introduce a new search method to accelerate the synthesis of solid textures. This method decomposes the candidates in an exemplar into several subgroups and searches for the most similar neighborhood within each decomposed subgroup. We also apply subsurface scattering effects to the shell layer of a synthesized object for realistic rendering of a translucent solid texture. Experimental results show that our rendering method can produce realistic rendering results for various heterogeneous translucent objects. It can also represent cross-sections of an object realistically without reconstructing the texture and surface geometry.
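One plausible reading of the subgroup search, sketched under assumptions: candidate neighbourhood vectors from the exemplar are clustered (here with plain k-means, which the paper may not use), and each query then searches only the nearest non-empty subgroup instead of the full candidate set.

```python
import numpy as np

def build_subgroups(candidates, n_groups=8, iters=20, seed=0):
    """Split candidate neighbourhood vectors into subgroups with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = candidates[rng.choice(len(candidates), n_groups, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(candidates[:, None, :] - centers[None, :, :], axis=2)
        label = d.argmin(axis=1)
        for g in range(n_groups):
            if np.any(label == g):
                centers[g] = candidates[label == g].mean(axis=0)
    return centers, label

def best_match(query, candidates, centers, label):
    """Search only inside the nearest non-empty subgroup."""
    order = np.argsort(np.linalg.norm(centers - query, axis=1))
    for g in order:
        idx = np.flatnonzero(label == g)
        if idx.size:
            return idx[np.linalg.norm(candidates[idx] - query, axis=1).argmin()]

# Example: 5x5 grayscale neighbourhoods flattened to 25-D candidate vectors.
cands = np.random.rand(2000, 25)
centers, label = build_subgroups(cands)
i = best_match(np.random.rand(25), cands, centers, label)
```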

6.
Objective: Textures generated by current handheld 3D scanning devices have insufficient resolution, and some regions contain highlights, shadows, and shading variations. We propose a texture reconstruction method based on multiple real photographs. Method: First, a feature-matching-based method registers the photographs with the geometric model. Next, according to the size of the texture to be reconstructed, a specially encoded position texture establishes a direct and accurate correspondence from photo pixels to texture pixels. Then, from multiple photographs taken with the camera flash as the light source, simultaneous equations are set up through the position texture and solved for the diffuse reflection component. Finally, an improved blending method based on hybrid weights fuses the solved diffuse components into the texture. Results: Intrinsic texture reconstruction was performed on three experimental models. Compared with the textures produced by the 3D scanner and those generated directly from photographs, the method is simple and convenient to use and yields highly clear intrinsic texture images free of highlights and shading effects. Conclusion: The experiments show that the reconstructed texture is clearly superior to the original texture in resolution, color fidelity, and consistency, and that the method is highly accurate and robust, meeting the needs of high-quality texture reconstruction.
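The flash-lit simultaneous equations can be illustrated, under a simple Lambertian assumption, as a per-texel least-squares fit; the light directions, the shading model, and all names below are illustrative assumptions, not the paper's exact formulation, which also accounts for the position texture and flash falloff.

```python
import numpy as np

def solve_diffuse_albedo(intensities, light_dirs, normal):
    """Least-squares estimate of a texel's diffuse albedo from several flash photos.

    Assumes a simple Lambertian model I_k = albedo * max(0, n . l_k); observations
    where the texel is shadowed or saturated would normally be discarded first.
    """
    intensities = np.asarray(intensities, dtype=float)
    n = normal / np.linalg.norm(normal)
    shading = np.maximum(np.asarray(light_dirs, dtype=float) @ n, 0.0)  # one value per photo
    valid = shading > 1e-3
    if not np.any(valid):
        return 0.0
    # Minimise sum_k (I_k - albedo * shading_k)^2  ->  closed-form ratio.
    return float(intensities[valid] @ shading[valid] / (shading[valid] @ shading[valid]))

# Example: one texel observed in four flash-lit photos (illustrative numbers only).
dirs = np.array([[0, 0, 1], [0.5, 0, 0.87], [-0.5, 0, 0.87], [0, 0.5, 0.87]])
obs = np.array([0.62, 0.55, 0.50, 0.58])
albedo = solve_diffuse_albedo(obs, dirs, normal=np.array([0.0, 0.0, 1.0]))
```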

7.
童晶  关华勇 《计算机应用》2007,27(4):1013-1016
Targeting film and animation production, a set of fast, realistic 3D face modeling algorithms is proposed using the LS_5000 3D laser scanner. Given only the 3D scanned point cloud of a real actor's face, uncalibrated photographs, and minimal user interaction, the algorithms generate a realistic 3D face model of a virtual actor, including a geometric model, a texture model, and an animation-oriented morphable model. Experimental results show that the output models are compact and well structured, can be used directly in practical film and animation production, and improve the efficiency of face model construction.

8.
何科雨  陈中贵 《图学学报》2022,43(6):1114-1123
Man-made decorative textures are widely used in everyday life. Traditional example-based texture generation methods first place small primitives in the target region, grow them iteratively, and finally fill the whole region. During the iterations, adjacent primitives intersect and overlap each other, so they must be deformed, clipped, or otherwise processed, which is often very time-consuming. Procedural methods generate richly layered textures on the 2D plane by designing many structurally complex rules, but they are difficult to extend to 3D space. This paper proposes a circle-packing-based texture generation method that can produce 2D or 3D textures. Circle packing is NP-hard; we convert it into an optimization problem that can be solved approximately and quickly. Once the packing is computed, rules defined on the circles fill or replace them to generate the texture. Because the texture is generated by designing rules, the method avoids the intersection and overlap problems between primitives.
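A toy version of casting circle packing as an optimization problem: equal-radius circles in the unit square are pushed apart by an overlap penalty until they roughly pack. The penalty form, step size, and fixed radius are assumptions standing in for the paper's actual formulation; each resulting circle could then carry a filling or replacement rule.

```python
import numpy as np

def pack_circles(n, radius, iters=500, step=0.05, seed=0):
    """Approximate circle packing in the unit square by penalty minimisation.

    Overlapping circles push each other apart and centres are clamped to keep
    every circle inside the domain.
    """
    rng = np.random.default_rng(seed)
    c = rng.uniform(radius, 1.0 - radius, size=(n, 2))
    for _ in range(iters):
        diff = c[:, None, :] - c[None, :, :]
        dist = np.linalg.norm(diff, axis=2) + np.eye(n)       # avoid self-division
        overlap = np.maximum(2.0 * radius - dist, 0.0)
        np.fill_diagonal(overlap, 0.0)
        # Each overlapping pair contributes a repulsive displacement.
        push = (overlap[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
        c = np.clip(c + step * push, radius, 1.0 - radius)
    return c

# Example: 40 equal circles of radius 0.07; each centre can then carry a motif.
centres = pack_circles(40, 0.07)
```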

9.
Anisotropic Solid Texture Synthesis Using Orthogonal 2D Views
Analytical approaches based on digitised 2D texture models for automatic solid (3D) texture synthesis have recently been introduced to computer graphics. However, these approaches cannot provide satisfactory solutions in the usual case of natural anisotropic textures (wood grain, for example). Indeed, solid texture synthesis requires particular care, and sometimes external knowledge, to "guess" the internal structure of solid textures because only 2D texture models are used for analysis. By making some basic assumptions about the internal structure of solid textures, we propose a very efficient method based on a hybrid analysis (spectral and histogram) for automatic synthesis of solid textures. This new method allows us to obtain high-precision solid textures (closely resembling the initial models) in a large number of cases, including the difficult case of anisotropic textures.
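Only as an illustration of the spectral half of such a hybrid analysis, the sketch below imposes a 2D exemplar's Fourier amplitude spectrum on random-phase noise; the paper works in 3D and adds a histogram constraint, neither of which is shown, and the function name is an assumption.

```python
import numpy as np

def spectrum_matched_noise(exemplar, seed=0):
    """Random-phase texture carrying the exemplar's Fourier amplitude spectrum (2D case).

    White noise supplies a Hermitian-symmetric random phase, so the inverse FFT is
    real and the amplitude spectrum of the exemplar is preserved exactly.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(exemplar.shape)
    f_noise = np.fft.fft2(noise)
    phase = f_noise / np.maximum(np.abs(f_noise), 1e-12)
    return np.real(np.fft.ifft2(np.abs(np.fft.fft2(exemplar)) * phase))

# Example: spectrum-matched noise from a 128x128 grayscale exemplar.
tex = spectrum_matched_noise(np.random.rand(128, 128))
```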

10.
Efficient example-based painting and synthesis of 2D directional texture
We present a new method for converting a photo or image to a synthesized painting following the painting style of an example painting. Treating painting styles of brush strokes as sample textures, we reduce the problem of learning an example painting to a texture synthesis problem. The proposed method uses a hierarchical patch-based approach to the synthesis of directional textures. The key features of our method are: 1) Painting styles are represented as one or more blocks of sample textures selected by the user from the example painting; 2) image segmentation and brush stroke directions defined by the medial axis are used to better represent and communicate shapes and objects present in the synthesized painting; 3) image masks and a hierarchy of texture patches are used to efficiently synthesize high-quality directional textures. The synthesis process is further accelerated through texture direction quantization and the use of Gaussian pyramids. Our method has the following advantages: First, the synthesized stroke textures can follow a direction field determined by the shapes of regions to be painted. Second, the method is very efficient; the generation time of a synthesized painting ranges from a few seconds to about one minute, rather than hours, as required by other existing methods, on a commodity PC. Furthermore, the technique presented here provides a new and efficient solution to the problem of synthesizing a 2D directional texture. We use a number of test examples to demonstrate the efficiency of the proposed method and the high quality of results produced by the method.
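The Gaussian pyramids mentioned above are standard; a minimal construction is sketched below (the 5-tap binomial kernel and edge padding are assumptions), since running neighbourhood matching coarse-to-fine on the small levels is what usually provides the speed-up.

```python
import numpy as np

def gaussian_pyramid(img, levels=4):
    """Build a Gaussian pyramid by repeated 5-tap blur and 2x downsampling."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        cur = pyr[-1]
        # Separable blur: filter rows, then columns (edge-padded convolution).
        padded = np.pad(cur, 2, mode='edge')
        rows = sum(kernel[k] * padded[:, k:k + cur.shape[1]] for k in range(5))
        both = sum(kernel[k] * rows[k:k + cur.shape[0], :] for k in range(5))
        pyr.append(both[::2, ::2])
    return pyr

# Example: four-level pyramid of a 256x256 grayscale exemplar.
levels = gaussian_pyramid(np.random.rand(256, 256), levels=4)
```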

11.
For spatially anisotropic solid textures, an example-based solid texture synthesis method is proposed that extends 2D patch-pasting techniques to 3D and applies them to a pre-built hepatic artery model. Guided by a tensor field defined by the user on a tetrahedral mesh, anisotropic 3D texture sample blocks are pasted with the tetrahedron as the synthesis unit. Experimental results and theoretical analysis show that the method can synthesize the desired texture effect in an arbitrary 3D region under user interaction, models the internal structure of the hepatic artery well, and supports real-time visualization of cutting.

12.
‘The word “fashion” is synonymous with the word “change”. Fashion begins with fabrics and fabrics begin with colour.’ This famous remark/definition of ‘fashion’ must now be revised in the era of digital technology. In this paper, we propose a novel print design concept, from 3D garments to 2D textiles. By taking advantage of the cutting-edge developments in surface parameterisation, cloth simulation and texture assignment, we develop a computer system that allows designers to create figure-flattering prints directly onto 3D garments, and outputs 2D pattern pieces with matched texture that are ready for digital printing and garment production. It reverses the traditional design process from 2D fabrics to 3D garments. The results produced by the proposed method guarantee textural continuity in both garment and pattern pieces. It not only releases apparel makers from the tedious work of matching texture along seams, but also provides users with a new tool to create one-of-a-kind fashion products by designing personalised prints.

13.
Differing from common 2D images, a texture map, since it is used to project onto a 3D model in 3D space, not only contains 2D texture information, but also implicitly associates certain 3D geometric information. Related to this, an effective 3D geometry-dependent texture map compression method with hybrid region of interest (ROI) coding is proposed in this paper. We regard the visually important area of the texture map as the ROI. To acquire the visually important areas of the texture map, we take into account information from both the 3D geometry and 2D texture maps, depicting the saliency of the textured model, the distortion of the texture mapping, and the boundary of the texture atlas. These visually important areas are expressed as a visual importance map. According to the particularity of the texture map, a hybrid ROI coding method that utilizes Max-Shift and an improved post compression rate distortion (PCRD) technique is presented, guided by this visual importance map. To find the exact wavelet coefficients pertaining to these ROIs before carrying out the hybrid ROI coding, this paper proposes a stochastic coefficient priority mask map computational method. Experimental results show that the visually important areas of the texture image have a better visual effect and that a good rendering result can be obtained from the texture mapping.
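The Max-Shift part of the hybrid ROI coding follows the standard JPEG2000 idea; the sketch below shows that idea on quantised integer coefficients (the integer quantisation, function names, and round-trip check are assumptions, and the improved PCRD stage is not modelled).

```python
import numpy as np

def maxshift_encode(coeffs, roi_mask):
    """Max-Shift ROI scaling on quantised (integer) wavelet coefficients.

    The shift s is chosen so 2**s exceeds every background magnitude; ROI
    coefficients are multiplied by 2**s, so any nonzero ROI coefficient ends up
    strictly above the background range and an embedded coder emits it first.
    """
    coeffs = np.asarray(coeffs, dtype=np.int64)
    bg = np.abs(coeffs[~roi_mask])
    s = int(np.ceil(np.log2(bg.max() + 1))) if bg.size else 0
    scaled = coeffs.copy()
    scaled[roi_mask] <<= s
    return scaled, s

def maxshift_decode(scaled, s):
    """Decoder: magnitudes >= 2**s can only be (shifted) ROI coefficients."""
    out = scaled.copy()
    roi = np.abs(out) >= (1 << s)
    out[roi] >>= s
    return out

# Example: quantised coefficient plane with a square visually important region.
c = np.random.randint(-31, 32, size=(64, 64))
mask = np.zeros_like(c, dtype=bool)
mask[16:32, 16:32] = True
enc, s = maxshift_encode(c, mask)
dec = maxshift_decode(enc, s)
assert np.array_equal(dec, c)
```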

14.
The use of attribute maps for 3D surfaces is an important issue in geometric modeling, visualization and simulation. Attribute maps describe various properties of a surface that are necessary in applications. In the case of visual properties, such as color, they are also called texture maps. Usually, the attribute representation exploits a parametrization g: U ⊂ ℝ² → ℝ³ of a surface in order to establish a two-dimensional domain where attributes are defined. However, it is not possible, in general, to find a global parametrization without introducing distortions into the mapping. For this reason, an atlas structure is often employed. The atlas is a set of charts defined by a piecewise parametrization of a surface, which allows local mappings with small distortion. Texture atlas generation can be naturally posed as an optimization problem where the goal is to minimize both the number of charts and the distortion of each mapping. Additionally, specific applications can impose other restrictions, such as the type of mapping. An example is 3D photography, where the texture comes from images of the object captured by a camera [4]. Consequently, the underlying parametrization is a projective mapping. In this work, we investigate the problem of building and manipulating texture atlases for 3D photography applications. We adopt a variational approach to construct an atlas structure with the desired properties. For this purpose, we have extended the method of Cohen–Steiner et al. [6] to handle the texture mapping set-up by minimizing distortion error when creating local charts. We also introduce a new metric tailored to projective maps that is suited to 3D photography.

15.
A 3D reconstruction method based on binocular vision is proposed that requires neither expensive equipment nor a generic model. The effective face reconstruction region is extracted from the rectified images to reduce overall processing time. The Realtime local stereo matching algorithm and a region-growing algorithm constrained by seed disparities are improved: region growing combines Realtime threshold ordering with confidence ordering, which makes seed pixel extraction more reliable and reduces the chance of mismatches during growing. Finally, texture mapping is studied to improve the realism of the reconstructed model. Experimental results show that the method produces realistic, smooth 3D face models.
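A rough sketch, under assumptions, of seed-constrained disparity region growing: each pixel adjacent to an already-matched one tries its neighbour's disparity plus or minus one and keeps the candidate with the lowest window SAD cost. The window size, cost threshold, and names are made up, and the Realtime threshold/confidence ordering of the paper is not reproduced.

```python
import numpy as np
from collections import deque

def grow_disparities(left, right, seeds, win=3, max_cost=0.1):
    """Grow a disparity map outwards from reliable seed pixels.

    `max_cost` is the allowed mean absolute difference per pixel inside the window.
    Unreached pixels keep the value -1.
    """
    h, w = left.shape
    disp = np.full((h, w), -1, dtype=int)
    q = deque()
    for (y, x, d) in seeds:
        disp[y, x] = d
        q.append((y, x))

    def cost(y, x, d):
        r = win // 2
        if d < 0 or x - d - r < 0 or x + r >= w or y - r < 0 or y + r >= h:
            return np.inf
        a = left[y - r:y + r + 1, x - r:x + r + 1]
        b = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
        return float(np.abs(a - b).sum())

    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and disp[ny, nx] < 0:
                cands = [disp[y, x] - 1, disp[y, x], disp[y, x] + 1]
                costs = [cost(ny, nx, d) for d in cands]
                best = int(np.argmin(costs))
                if costs[best] < max_cost * win * win:
                    disp[ny, nx] = cands[best]
                    q.append((ny, nx))
    return disp

# Example: grow from two hand-picked seeds on a synthetic constant-disparity pair.
L = np.random.rand(60, 80)
R = np.roll(L, -4, axis=1)                      # constant disparity of 4 pixels
d_map = grow_disparities(L, R, seeds=[(30, 40, 4), (10, 60, 4)])
```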

16.
There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering texture for the 3D human avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given image as it moves, it is critical to synthesize the occluded texture pattern that is unseen in the source image. To generate a plausible texture map for 3D human avatars, the occluded texture pattern needs to be synthesized with respect to the visible texture from the given image. Moreover, the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refining process. The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the geometry information. The sampled texture is further refined and adjusted by the refiner network. To maintain the clear details in the given image, both the sampled and the refined textures are blended to produce the final texture map. To effectively guide the sampler network to achieve its goal, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to the task where the alignment needs to be considered. We conducted experiments to show that our method outperforms previous methods qualitatively and quantitatively.

17.
Texture mapping has been widely used to improve the quality of 3D rendered images. To reduce the storage and bandwidth impact of texture mapping, compression systems are commonly used. To further increase the quality of the rendered images, texture filtering is also often adopted. These two techniques are generally considered to be independent. First, a decompression step is executed to gather texture samples, which is then followed by a separate filtering step. We have investigated a system based on linear transforms that merges both phases together. This allows more efficient decompression and filtering at higher compression ratios. This paper formally presents our approach for any linear transformation, how the commonly used discrete cosine transform can be adapted to this new approach, and how this method can be implemented in real time on current-generation graphics cards using shaders. Through reuse of the existing hardware filtering, fast magnification and minification filtering is achieved. Our implementation provides fully anisotropically filtered samples four to six times faster than an implementation using two separate phases for decompression and filtering. Additionally, our transform-based compression also provides increased and variable compression ratios over standard hardware compression systems at a comparable or better quality level.
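The merging of decompression and filtering rests on a linearity argument: since bilinear filtering and the inverse DCT are both linear, a filtered sample is a single linear functional of the coefficient block. The sketch below (CPU-side, not the paper's shader implementation; the 8x8 block size and names are assumptions) verifies this against decode-then-filter.

```python
import numpy as np

def dct_basis(n=8):
    """Orthonormal DCT-II basis matrix B such that block = B.T @ coeffs @ B."""
    k = np.arange(n)
    B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    B[0, :] = np.sqrt(1.0 / n)
    return B

def filtered_sample(coeffs, x, y):
    """Bilinearly filtered texel fetched directly from an 8x8 DCT coefficient block.

    The four-texel bilinear weights are folded into the basis evaluation, so no
    full block decode is needed.
    """
    B = dct_basis(coeffs.shape[0])
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Filter weights expressed as linear functionals on columns/rows of the block.
    wx = np.zeros(coeffs.shape[1])
    wx[x0] = 1 - fx
    wx[min(x0 + 1, len(wx) - 1)] += fx
    wy = np.zeros(coeffs.shape[0])
    wy[y0] = 1 - fy
    wy[min(y0 + 1, len(wy) - 1)] += fy
    # block = B.T @ coeffs @ B, so  wy @ block @ wx = (B @ wy) @ coeffs @ (B @ wx).
    return float((B @ wy) @ coeffs @ (B @ wx))

# Example: agreement with explicit decode-then-filter on a random coefficient block.
C = np.random.randn(8, 8)
block = dct_basis().T @ C @ dct_basis()
x, y = 3.4, 5.7
ref = ((1 - (y - 5)) * ((1 - (x - 3)) * block[5, 3] + (x - 3) * block[5, 4])
       + (y - 5) * ((1 - (x - 3)) * block[6, 3] + (x - 3) * block[6, 4]))
assert abs(filtered_sample(C, x, y) - ref) < 1e-9
```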

18.
In meteorology, visualization of 3D storm data fields is an important technique for storm monitoring and disaster prediction, and its efficiency and quality directly affect the accuracy and timeliness of storm analysis. Building on traditional 2D and 3D texture-mapping volume rendering methods, a GPU-based hybrid multi-dimensional texture rendering technique for storm data fields is proposed. The method stores the storm data field in a 3D texture and dynamically generates proxy geometry, which removes the texture data redundancy of traditional approaches and keeps 3D interaction smooth. The smooth resampling strategy for texture mapping proposed in this method noticeably improves the rendered appearance of the storm volume and, to some extent, avoids the CPU-GPU communication bottleneck.

19.
Existing solid texture synthesis algorithms generate a full volume of color content from a set of 2D example images. We introduce a new algorithm with the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism. This is especially useful when texturing objects, since only a thick layer around the surface needs to be synthesized. A major difficulty lies in reducing the dependency chain of neighborhood matching, so that each voxel only depends on a small number of other voxels. Our key idea is to synthesize a volume from a set of pre‐computed 3D candidates, each being a triple of interleaved 2D neighborhoods. We present an efficient algorithm to carefully select in a pre‐process only those candidates forming consistent triples. This significantly reduces the search space during subsequent synthesis. The result is a new parallel, spatially deterministic solid texture synthesis algorithm which runs efficiently on the GPU. Our approach generates high resolution solid textures on surfaces within seconds. Memory usage and synthesis time only depend on the output textured surface area. The GPU implementation of our method rapidly synthesizes new textures for the surfaces appearing when interactively breaking or cutting objects.
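A loose sketch, under assumptions, of what "consistent triples" could mean at the simplest level: a candidate pairs one 2D neighbourhood centre per view, and only triples whose three centre colours nearly agree are kept. The random sampling, tolerance test, and names are illustrative; the paper's pre-process is considerably more careful.

```python
import numpy as np

def consistent_triples(ex_xy, ex_xz, ex_yz, n_samples=5000, tol=0.05, seed=0):
    """Randomly sample candidate triples (one neighbourhood centre per exemplar view)
    and keep only those whose three centre colours nearly agree."""
    rng = np.random.default_rng(seed)
    h = min(ex_xy.shape[0], ex_xz.shape[0], ex_yz.shape[0])
    w = min(ex_xy.shape[1], ex_xz.shape[1], ex_yz.shape[1])
    p = rng.integers(0, [h, w, h, w, h, w], size=(n_samples, 6))
    c0 = ex_xy[p[:, 0], p[:, 1]]
    c1 = ex_xz[p[:, 2], p[:, 3]]
    c2 = ex_yz[p[:, 4], p[:, 5]]
    spread = np.maximum(np.abs(c0 - c1), np.maximum(np.abs(c1 - c2), np.abs(c0 - c2)))
    return p[spread < tol]

# Example with three grayscale exemplars (one per axis-aligned view).
keep = consistent_triples(np.random.rand(64, 64), np.random.rand(64, 64), np.random.rand(64, 64))
```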

20.
Uniform texture synthesis and texture mapping using global parameterization
Texture mapping and texture synthesis are two popular methods for the decoration of surfaces with visual detail. Here, an existing challenge is to preserve, or at least balance, two competing metrics: scale and angle. In this paper we present two methods for this, both based on global conformal parameterization. First, we describe a texture synthesis algorithm for surfaces with arbitrary topology. By using the conformal parameterization, the 3D surface texture synthesis problem can be converted to a 2D image synthesis problem, which is more intuitive, easier, and conceptually simpler. While the conformality of the parameterization naturally preserves the angles of the texture, in this paper we provide a multi-scale technique to also maintain a more uniform area scaling factor. A second novel contribution is to employ the global parameterization to simultaneously preserve orthogonality and size in texture mapping. For this, we show that a conformal factor-driven mass-spring method offers a convenient way to trade off these two qualitative metrics. Our algorithms are simple, efficient and automatic, and they are theoretically sound and applicable to general surfaces.
