Similar Documents
20 similar documents found (search time: 31 ms)
1.
By analysing the definition of video textures, video texture synthesis is formulated as a combinatorial optimization problem, and a synthesis algorithm based on a segmented genetic algorithm is proposed: the segmented genetic algorithm processes a finite-length source video to produce a continuous video sequence that can be played indefinitely. The algorithm adopts a more appropriate similarity measure and evaluation criterion, eliminating much of the complex preprocessing of the source video, and its segmented search strategy synthesizes high-quality video textures quickly with only a few generations. Compared with existing video texture synthesis methods, the algorithm has lower computational complexity and improves both synthesis speed and quality. In addition, the experiments report how population size and the maximum number of generations affect synthesis quality and speed.
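As a hedged illustration (not the paper's segmented genetic algorithm), the frame-similarity measure at the heart of video-texture synthesis can be sketched in a few lines: playback can loop seamlessly if it can jump between a pair of visually similar frames. The function names and the toy one-pixel "video" below are invented for illustration.

```python
def frame_distance(a, b):
    """Sum of squared differences between two flattened grayscale frames."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_jump(frames):
    """Return the pair (i, j), with j >= i + 2, whose frames are most
    similar, so the segment frames[i..j] can repeat without a visible seam."""
    best = None
    for i in range(len(frames)):
        for j in range(i + 2, len(frames)):  # skip trivial neighbours
            d = frame_distance(frames[i], frames[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    return best[1], best[2]

# Toy 1-pixel video: frame 0 and frame 4 match exactly, so loop over 0..4.
frames = [[0.0], [1.0], [2.0], [1.0], [0.0]]
print(best_jump(frames))  # (0, 4)
```

A real system would search over many candidate transitions and blend across them; this sketch only shows the similarity criterion the optimization works on.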

2.
We introduce the concept of 4D model flow for the precomputed alignment of dynamic surface appearance across 4D video sequences of different motions reconstructed from multi‐view video. Precomputed 4D model flow allows the efficient parametrization of surface appearance from the captured videos, which enables efficient real‐time rendering of interpolated 4D video sequences whilst accurately reproducing visual dynamics, even when using a coarse underlying geometry. We estimate the 4D model flow using an image‐based approach that is guided by available geometry proxies. We propose a novel representation in surface texture space for efficient storage and online parametric interpolation of dynamic appearance. Our 4D model flow overcomes previous requirements for computationally expensive online optical flow computation for data‐driven alignment of dynamic surface appearance by precomputing the appearance alignment. This leads to an efficient rendering technique that enables the online interpolation between 4D videos in real time, from arbitrary viewpoints and with visual quality comparable to the state of the art.

3.
4D Video Textures (4DVT) introduce a novel representation for rendering video‐realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free‐viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video‐realistic interactive animation through two contributions: a layered view‐dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high‐level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user‐study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves >90% reduction in size and halves the rendering cost.

4.
The cube environment mapping (CEM) algorithm uses six 2D texture images to build the background environment of a virtual scene. The generated background is usually fixed, which cannot satisfy scenes that require a changing background. To address this, a dynamic cube-texture generation algorithm is proposed that generates the cube texture images dynamically through mapping transformations. Experimental results show that the algorithm can build real-time, dynamic background scenes.

5.
In recent years, the convergence of computer vision and computer graphics has put forth a new field of research that focuses on the reconstruction of real-world scenes from video streams. To make immersive 3D video reality, the whole pipeline spanning from scene acquisition over 3D video reconstruction to real-time rendering needs to be researched. In this paper, we describe latest advancements of our system to record, reconstruct and render free-viewpoint videos of human actors. We apply a silhouette-based non-intrusive motion capture algorithm making use of a 3D human body model to estimate the actor’s parameters of motion from multi-view video streams. A renderer plays back the acquired motion sequence in real-time from any arbitrary perspective. Photo-realistic physical appearance of the moving actor is obtained by generating time-varying multi-view textures from video. This work shows how the motion capture sub-system can be enhanced by incorporating texture information from the input video streams into the tracking process. 3D motion fields are reconstructed from optical flow that are used in combination with silhouette matching to estimate pose parameters. We demonstrate that a high visual quality can be achieved with the proposed approach and validate the enhancements caused by the motion field step.

6.
Robust Shot Boundary Detection and Motion-Based Video Summarization
To meet the needs of applications such as content-based video indexing and retrieval, a video summarization method is proposed. First, robust shot boundary detection is performed: an initial detection computes colour-histogram distances between adjacent frames, and false detections caused by camera motion are removed by analysing inter-frame motion vectors. Shots are then classified by their motion-indication maps into static shots, shots containing object motion, and shots containing significant camera motion. Finally, a multi-instance distance measure between shots and an initialization method for the clustering algorithm are proposed; kernel K-means clusters each class of shots, the shot closest to each cluster centre is extracted as a key shot, and the key shots are arranged in temporal order to form the video summary. Compared with existing methods, the proposed method performs more robust shot boundary detection, identifies motion information within shots, and processes each shot class separately, strengthening the summarization capability of the video summary.
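The histogram step of such a detection pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: it omits the motion-vector filtering of camera-induced false positives, and the bin count and threshold are assumptions.

```python
def histogram(frame, bins=4, max_val=256):
    """Normalised grey-level histogram of a flattened frame."""
    h = [0] * bins
    for px in frame:
        h[px * bins // max_val] += 1
    n = len(frame)
    return [c / n for c in h]

def shot_boundaries(frames, threshold=0.5):
    """Indices i where a cut is flagged between frames[i-1] and frames[i],
    based on the L1 distance between adjacent-frame histograms."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        dist = sum(abs(a - b) for a, b in zip(prev, cur))
        if dist > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two dark frames followed by two bright frames: one cut, at index 2.
video = [[10, 20, 30], [12, 22, 28], [200, 210, 220], [205, 215, 225]]
print(shot_boundaries(video))  # [2]
```

In practice the histogram would be computed per colour channel and the threshold chosen adaptively; the structure of the comparison loop stays the same.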

7.
We present a technique for coupling simulated fluid phenomena that interact with real dynamic scenes captured as a binocular video sequence. We first process the binocular video sequence to obtain a complete 3D reconstruction of the scene, including velocity information. We use stereo for the visible parts of 3D geometry and surface completion to fill the missing regions. We then perform fluid simulation within a 3D domain that contains the object, enabling one‐way coupling from the video to the fluid. In order to maintain temporal consistency of the reconstructed scene and the animated fluid across frames, we develop a geometry tracking algorithm that combines optic flow and depth information with a novel technique for “velocity completion”. The velocity completion technique uses local rigidity constraints to hypothesize a motion field for the entire 3D shape, which is then used to propagate and filter the reconstructed shape over time. This approach not only generates smoothly varying geometry across time, but also simultaneously provides the necessary boundary conditions for one‐way coupling between the dynamic geometry and the simulated fluid. Finally, we employ a GPU based scheme for rendering the synthetic fluid in the real video, taking refraction and scene texture into account.

8.
Dynamic texture is one of the dynamic models in computer vision; it is statistically stationary in space and stochastically repetitive in time. The goal of dynamic texture synthesis is to generate images visually similar to a given texture. During synthesis, the accumulation of regression prediction error is a key cause of texture-quality degradation. This paper therefore proposes a dynamic texture synthesis model with a self-correction mechanism. Sharpness, structural similarity, optical flow, and other indicators determine the range of data to optimize and locate the optimization extrema. Through the self-correction mechanism, the original data are replaced by the optimized data, which are then used for regression prediction. Finally, a convolutional autoencoder reconstructs the predicted data into high-dimensional dynamic texture video frames. Experiments on the DynTex database compare the model with several typical dynamic texture synthesis models. The results show that frames synthesized by the model achieve lower MSE (mean square error) and higher PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) against real video frames. The model alleviates the ghosting, blur, and noise that arise in dynamic texture synthesis, and can therefore generate longer dynamic textures with better visual quality, validating the effectiveness of the proposed modelling approach.
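For reference, the MSE and PSNR metrics named in the abstract can be computed as below; this is a generic sketch for 8-bit frames, not the paper's evaluation code, and SSIM is omitted for brevity.

```python
import math

def mse(a, b):
    """Mean squared error between two flattened frames."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(a, b)
    if e == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / e)

real = [100, 120, 140]
synth = [101, 119, 141]   # each pixel off by 1
print(round(mse(real, synth), 2))   # 1.0
print(round(psnr(real, synth), 2))  # 48.13
```

Lower MSE and higher PSNR/SSIM, as reported in the abstract, correspond to synthesized frames that track the real frames more closely.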

9.
We present a generative model and inference algorithm for 3D nonrigid object tracking. The model, which we call G-flow, enables the joint inference of 3D position, orientation, and nonrigid deformations, as well as object texture and background texture. Optimal inference under G-flow reduces to a conditionally Gaussian stochastic filtering problem. The optimal solution to this problem reveals a new space of computer vision algorithms, of which classic approaches such as optic flow and template matching are special cases that are optimal only under special circumstances. We evaluate G-flow on the problem of tracking facial expressions and head motion in 3D from single-camera video. Previously, the lack of realistic video data with ground truth nonrigid position information has hampered the rigorous evaluation of nonrigid tracking. We introduce a practical method of obtaining such ground truth data and present a new face video data set that was created using this technique. Results on this data set show that G-flow is much more robust and accurate than current deterministic optic-flow-based approaches.
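The abstract notes that template matching is a special case of G-flow. A minimal sum-of-squared-differences (SSD) template match over a 1-D signal, sketched below, illustrates the basic operation that the filtering framework generalises; this toy code is not from the paper.

```python
def match(signal, template):
    """Offset of the template within the signal that minimises the
    sum of squared differences (SSD)."""
    best_off, best_ssd = 0, float("inf")
    for off in range(len(signal) - len(template) + 1):
        ssd = sum((signal[off + k] - t) ** 2 for k, t in enumerate(template))
        if ssd < best_ssd:
            best_off, best_ssd = off, ssd
    return best_off

# The template [5, 9, 5] occurs exactly at offset 2.
print(match([0, 0, 5, 9, 5, 0], [5, 9, 5]))  # 2
```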

10.
A Motion Texture Model for Synthesizing Dynamic Sequences
A new two-layer statistical model for synthesizing dynamic sequences, the motion texture model, is proposed. The model analyses dynamic sequences automatically with statistical methods and can synthesize new dynamic sequences with the same statistical properties as the original sample data. Its two layers are a texton layer and a texton-distribution layer: each individual texton is represented by a linear dynamic system, while the statistical distribution of textons is described by a transition matrix. The paper discusses in detail how the motion texture model is learned under the maximum-likelihood criterion, and describes how dynamic sequences are synthesized automatically with the model. Experiments synthesizing dance motions and video sequences verify the model's effectiveness.
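As a hedged sketch of the texton layer only: a linear dynamic system generates a sequence by the rollout x_{t+1} = A x_t + w_t. The toy 1-D system below (scalar A, optional Gaussian noise) shows how a learned texton, once fitted, produces new frames; the constants are invented for illustration.

```python
import random

def rollout(a, x0, steps, noise=0.0, seed=0):
    """Roll out the scalar LDS x_{t+1} = a * x_t + w_t,
    with w_t ~ N(0, noise)."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1] + rng.gauss(0.0, noise))
    return xs

# Noise-free rollout of a decaying system: each state is half the last.
print(rollout(0.5, 1.0, 3))  # [1.0, 0.5, 0.25, 0.125]
```

In the full model the state is a vector, A is a matrix fitted by maximum likelihood, and a transition matrix over textons decides which LDS generates each segment.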

11.
3D video billboard clouds reconstruct and represent a dynamic three-dimensional scene using displacement-mapped billboards. They consist of geometric proxy planes augmented with detailed displacement maps and combine the generality of geometry-based 3D video with the regularization properties of image-based 3D video. 3D video billboards are an image-based representation placed in the disparity space of the acquisition cameras and thus provide a regular sampling of the scene with a uniform error model. We propose a general geometry filtering framework which generates time-coherent models and removes reconstruction and quantization noise as well as calibration errors. This replaces the complex and time-consuming sub-pixel matching process in stereo reconstruction with a bilateral filter. Rendering is performed using a GPU-accelerated algorithm which generates consistent view-dependent geometry and textures for each individual frame. In addition, we present a semi-automatic approach for modeling dynamic three-dimensional scenes with a set of multiple 3D video billboard clouds.

12.
Flame Synthesis Based on Particles and Texture Rendering
Most existing flame synthesis algorithms are based on particle systems, whose main drawback is heavy computation. This paper first sketches the flame's outer contour with a small number of particles, then fills in the flame texture by texture rendering. This exploits the realism of particle-generated contours while avoiding large numbers of particle-state computations, and it preserves the consistency and continuity of the dynamic flame texture.

13.
We present an algorithm based on statistical learning for synthesizing static and time-varying textures matching the appearance of an input texture. Our algorithm is general and automatic and it works well on various types of textures, including 1D sound textures, 2D texture images, and 3D texture movies. The same method is also used to generate 2D texture mixtures that simultaneously capture the appearance of a number of different input textures. In our approach, input textures are treated as sample signals generated by a stochastic process. We first construct a tree representing a hierarchical multiscale transform of the signal using wavelets. From this tree, new random trees are generated by learning and sampling the conditional probabilities of the paths in the original tree. Transformation of these random trees back into signals results in new random textures. In the case of 2D texture synthesis, our algorithm produces results that are generally as good as or better than those produced by previously described methods in this field. For texture mixtures, our results are better and more general than those produced by earlier methods. For texture movies, we present the first algorithm that is able to automatically generate movie clips of dynamic phenomena such as waterfalls, fire flames, a school of jellyfish, a crowd of people, etc. Our results indicate that the proposed technique is effective and robust.

14.
Image-Based Indoor Virtual Environments
Based on image-based modelling and rendering techniques, a complete scheme for constructing indoor virtual environments is presented: the user need only supply a few photographs to reconstruct a panoramic image of the indoor scene. The scheme comprises the following steps. First, the user interactively specifies matching pixels in the images, and a motion-analysis algorithm recovers the geometric structure of the whole scene. The original images are then transformed into the planar parametric coordinate system, texture images are extracted, and the textures are stitched in parameter space. Finally, a panoramic image of the scene is generated. The algorithm imposes no demanding requirements on shooting conditions or equipment, has low computational cost, and is quite stable.

15.
Road traffic density has always been a concern in large cities around the world, and many approaches have been developed to assist in solving congestion caused by slow traffic flow. This work proposes a congestion rate estimation approach that relies on real-time video scenes of road traffic, and was implemented and evaluated on eight different hotspots covering 33 different urban roads. The approach relies on road scene morphology for estimating the average speed of vehicles, along with measuring the overall randomness of the video scenes as a frame-texture analysis indicator. Experimental results show the feasibility of the proposed approach in reliably estimating traffic density and in providing an early warning to drivers on road conditions, thereby mitigating the negative effect of slow traffic flow on their daily lives.
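One plausible way to quantify "scene randomness" as a texture indicator, sketched below under stated assumptions, is the Shannon entropy of a frame's grey-level histogram: a frame full of varied vehicle texture spreads over many bins, while empty asphalt concentrates in one. The bin count and the interpretation are illustrative, not the paper's method.

```python
import math

def entropy(frame, bins=8, max_val=256):
    """Shannon entropy (in bits) of a flattened frame's grey-level histogram."""
    counts = [0] * bins
    for px in frame:
        counts[px * bins // max_val] += 1
    n = len(frame)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

empty_road = [128] * 16              # flat asphalt: a single grey level
busy_road = list(range(0, 256, 16))  # varied texture: all 8 bins occupied
print(entropy(busy_road))            # 3.0
print(entropy(busy_road) > entropy(empty_road))  # True
```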

16.
Texture histograms as a function of irradiation and viewing direction
The textural appearance of materials encountered in our daily environment depends on two directions, the irradiation and viewing direction. We investigate the bidirectional grey level histograms of a large set of materials, obtained from a texture database. We distinguish important categories, relate the various effects to physical mechanisms, and list material attributes that influence the bidirectional histograms. We use a model for rough surfaces with locally diffuse and/or specular reflection properties, a class of materials that commonly occurs, to generate bidirectional histograms and obtain close agreement with experimental data. We discuss several applications of bidirectional texture functions and histograms. In particular, we present a new approach to texture mapping based on bidirectional histograms. For 3D texture, this technique is superior to standard 2D texture mapping at hardly any extra computational cost or memory requirements.

17.
Realistic representation of bark is an important problem in tree visualization. Bark surfaces carry rich texture detail, and the grain of the texture may change gradually across different parts of the trunk; faithfully reproducing these effects is not trivial. A synthesis method for realistic bark textures is proposed. Based on patch-based texture synthesis, it synthesizes a complete bark texture for a trunk section from a few bark sample textures, effectively avoiding the seam problems caused by general texture-quilting methods. Moreover, a strategy that controls synthesis probability realizes gradual transitions between bark textures of different grain, so the method can realistically depict bark gradually changing from old to young. Bark scar effects are generated from samples combined with edge blending. Experiments show that the method can generate realistic bark textures with growth-variation characteristics, meeting the requirements of realistic rendering.

18.
This paper presents methods for photo‐realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4‐D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

19.
3D face models are widely used in video telephony, video conferencing, film production, computer games, face recognition, and other fields. Current 3D face modelling generally requires multiple images with neutral expressions. This paper proposes a method for reconstructing a 3D face from frontal and profile images with arbitrary expressions. Facial features are first extracted from the 2D images; then, based on a 3D statistical face model, a specific 3D face is obtained through scaling, translation, rotation, and global and local matching. Using the facial texture information from the 2D images, texture mapping yields the complete 3D face. Reconstruction from a large number of real 2D face images confirms the effectiveness and robustness of the method.

20.
Real-Time Rain Simulation Based on a Particle System
李苏军  吴玲达 《计算机工程》2007,33(18):236-238
Based on fluid dynamics and particle-system theory, a method for generating three-dimensional rain in real time is presented. The algorithm models rain particles with rectangular base particles, and uses dynamic texture mapping and opacity perturbation. Following the raindrop's equation of falling motion, it describes the motion of rain particles of different sizes under gravity, air buoyancy, and drag, and generates the 3D rain scene dynamically with view-dependent techniques. Compared with traditional rain-simulation algorithms, the method both simulates rain motion correctly and reduces computational complexity, realistically reproducing the 3D visual effect of rain. It shows strong realism while supporting real-time interactive navigation, and has practical value.
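A hedged sketch of the kind of per-particle update such an abstract describes: a raindrop accelerated by gravity and slowed by a velocity-proportional drag term (buoyancy folded into the constants), integrated with explicit Euler steps until it reaches terminal velocity. The constants are illustrative, not from the paper.

```python
G = 9.8  # gravitational acceleration, m/s^2
K = 2.0  # combined drag/buoyancy coefficient per unit mass, 1/s

def step(v, dt=0.01):
    """One explicit-Euler update of downward velocity v (m/s):
    dv/dt = G - K * v."""
    return v + (G - K * v) * dt

v = 0.0
for _ in range(2000):  # simulate 20 s of fall
    v = step(v)
print(round(v, 2))  # 4.9  (terminal velocity G / K)
```

Each rain particle in a full system would carry position and size as well, with size-dependent drag giving the different fall speeds the abstract mentions.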


Copyright©北京勤云科技发展有限公司  京ICP备09084417号