Similar Articles
20 similar articles found (search time: 125 ms)
1.
To reduce the amount of video data that must be captured, state-of-the-art image-based rendering (IBR) methods map the dense-viewpoint information to the original signal of a compressed-sensing framework and treat the sparse-viewpoint images as the random measurements. However, the low-dimensional measurement signal is a linear combination of all dense-viewpoint information, whereas the sparse-viewpoint images originate from only a subset of viewpoints, so the images captured at sparse viewpoints are inconsistent with the low-dimensional measurement signal. This paper proposes an interval sampling matrix to eliminate the discrepancy between the measurement signal and the positions of the sparse-viewpoint images, and then constrains the sensing matrix formed by the measurement matrix and the basis functions to satisfy the restricted isometry property as far as possible, so that a unique and accurate solution of the original signal can be obtained. Simulation results show that, compared with the state-of-the-art methods, the proposed method improves both subjective and objective reconstruction quality for scenes of varying complexity.
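The abstract gives no implementation details; the following is a minimal, hypothetical sketch of the two ingredients it names: an interval (regularly spaced) sampling matrix, and the sensing matrix formed from it and a sparsifying basis, with mutual coherence used here as a cheap stand-in for checking the restricted isometry property. The DCT basis, the signal sizes, and the coherence check are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch (not the paper's code): interval sampling matrix combined with a
# DCT sparsifying basis, with mutual coherence as a cheap proxy for the RIP condition.
import numpy as np

def interval_sampling_matrix(n, m):
    """m x n selection matrix that keeps samples at regular intervals."""
    idx = np.linspace(0, n - 1, m).astype(int)
    phi = np.zeros((m, n))
    phi[np.arange(m), idx] = 1.0
    return phi

def dct_basis(n):
    """n x n orthonormal DCT-II matrix; its columns serve as the sparsifying basis."""
    i = np.arange(n)
    psi = np.cos(np.pi * (2 * i[:, None] + 1) * i[None, :] / (2 * n))
    psi *= np.sqrt(2.0 / n)
    psi[:, 0] /= np.sqrt(2.0)
    return psi

def mutual_coherence(a):
    """Largest normalized inner product between distinct columns of the sensing matrix."""
    a = a / np.linalg.norm(a, axis=0, keepdims=True)
    gram = np.abs(a.T @ a)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

n, m = 256, 64
sensing = interval_sampling_matrix(n, m) @ dct_basis(n)   # A = Phi * Psi
print("mutual coherence of sensing matrix:", mutual_coherence(sensing))
```

In a full pipeline the sparse-viewpoint images would play the role of the measurements y = A x, with x the sparse coefficients of the dense-viewpoint signal.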

2.
Measurement data, namely images and laser scan data, provide a reliable basis for modeling real plants. However, because plant structures are complex and heavily self-occluding, a camera or laser scanner can perceive only part of the reliable data, and the scans contain considerable noise, which makes recovering plant models from such data extremely difficult. This paper proposes a plant reconstruction method based on detecting sharp-point features, such as the leaf tips of leafy plants and the branch tips of leafless, branch-only plants. The method can conveniently reconstruct leafy plant models from multi-view images, and branch-only models from noisy laser data. Experiments confirm the correctness of the proposed feature detection algorithm and the effectiveness of the plant reconstruction algorithm.

3.
Large-Leaf Plant Reconstruction Combining Normal-Based Clustering   Cited by: 1 (self-citations: 0, others: 1)
Exploiting the characteristics of large-leaf plants, this paper proposes a method for reconstructing such plants from 3D point clouds, consisting of two main stages: point cloud clustering and leaf reconstruction. First, the 3D points of the leaves are clustered initially according to the distances between them; then, exploiting the fact that the leaves are large and flat and that different leaves have markedly different normal directions, the point cloud is subdivided into multiple clusters by computing per-point normals, each cluster representing a single leaf; finally, a generic leaf model is fitted to each cluster to form a leaf. Experimental results show that the method can reconstruct highly realistic large-leaf plants.
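As a rough illustration of the clustering stage described above (not the paper's actual algorithm), the sketch below estimates per-point normals by local PCA and then grows clusters that are both spatially compact and consistent in normal direction; the neighbourhood size, radius, and angle threshold are made-up parameters.

```python
# Illustrative sketch (not the paper's implementation): PCA normal estimation on a point
# cloud followed by region growing that splits clusters where normal directions diverge.
import numpy as np

def estimate_normals(points, k=12):
    """Per-point normals from the smallest eigenvector of the local covariance."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]            # eigenvector of the smallest eigenvalue
    return normals

def grow_leaf_clusters(points, normals, radius=0.05, angle_deg=20.0):
    """Group points that are spatially close and whose normals differ by < angle_deg."""
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] >= 0:
            continue
        stack, labels[seed] = [seed], current
        while stack:
            i = stack.pop()
            near = np.where(np.linalg.norm(points - points[i], axis=1) < radius)[0]
            for j in near:
                if labels[j] < 0 and abs(normals[i] @ normals[j]) > cos_thresh:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels

# Toy example: two flat "leaves" with different orientations.
rng = np.random.default_rng(0)
leaf_a = np.c_[rng.uniform(0, 0.2, 200), rng.uniform(0, 0.2, 200), np.zeros(200)]
leaf_b = np.c_[rng.uniform(0.5, 0.7, 200), np.zeros(200), rng.uniform(0, 0.2, 200)]
pts = np.vstack([leaf_a, leaf_b])
labels = grow_leaf_clusters(pts, estimate_normals(pts))
print("clusters found:", len(set(labels)))
```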

4.
Arbitrary Viewpoint Rendering Based on DIBR and Image Inpainting   Cited by: 1 (self-citations: 1, others: 0)
Depth-image-based rendering (DIBR) is a key technology for advanced video applications. To improve the image quality after viewpoint changes, this paper proposes an arbitrary-viewpoint rendering method based on DIBR and image inpainting. First, the depth image is processed morphologically to reduce the holes produced by the viewpoint change and to smooth object contours inside the target view; the target view is then generated with the view transformation equation; the target view containing holes is post-processed with an image inpainting algorithm that uses a cost function with a depth term, so that the texture search is constrained by depth and the best-matching patch is used to fill each hole; during hole filling a luminance-first strategy is adopted to accommodate different chroma subsampling formats. Both subjective comparisons and PSNR results show that the proposed algorithm outperforms the other algorithms.
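For context, the basic DIBR warping step that produces the holes discussed above can be sketched as follows; this is a generic toy version for a rectified, horizontally shifted virtual camera (the baseline, focal length, and test scene are invented), not the paper's pipeline with morphological preprocessing and depth-guided inpainting.

```python
# Illustrative DIBR sketch (not the paper's code): forward-warp pixels of a reference view
# by per-pixel disparity (baseline * focal / depth) and mark the holes left behind.
import numpy as np

def dibr_forward_warp(image, depth, baseline=0.1, focal=500.0):
    """Warp a rectified reference view to a horizontally shifted virtual view."""
    h, w = depth.shape
    target = np.zeros_like(image)
    target_depth = np.full((h, w), np.inf)
    hole_mask = np.ones((h, w), dtype=bool)
    disparity = np.round(baseline * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xt = x - disparity[y, x]
            if 0 <= xt < w and depth[y, x] < target_depth[y, xt]:   # z-buffer test
                target[y, xt] = image[y, x]
                target_depth[y, xt] = depth[y, x]
                hole_mask[y, xt] = False
    return target, hole_mask

# Toy scene: a near square in front of a far background.
img = np.full((120, 160, 3), 50, dtype=np.uint8)
img[40:80, 60:100] = (200, 80, 80)
dep = np.full((120, 160), 50.0)
dep[40:80, 60:100] = 10.0
virtual, holes = dibr_forward_warp(img, dep)
print("hole pixels to inpaint:", int(holes.sum()))
```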

5.
Arbitrary Viewpoint Rendering Based on DIBR and Image Fusion   Cited by: 1 (self-citations: 1, others: 1)
Virtual view synthesis is a key technology in applications such as 3D video conferencing. To render arbitrary views quickly and with high quality, a new view synthesis method based on depth-image-based rendering (DIBR) and image fusion is proposed. The method first preprocesses the reference images, including edge filtering of the depth images and rectification of the reference images, to reduce the large holes and false edges produced in the target image; it then generates the new-view images by 3D image warping and handles occlusions quickly with an occlusion-compatible algorithm; next, the two target images are fused into the new-view image; finally, the remaining small holes are filled by interpolation. Experiments show that the new method achieves satisfactory rendering results.

6.
When rendering large crowds in virtual scenes from images, the limited number of captured images causes popping artifacts as the viewpoint changes continuously. To address this, an image rendering algorithm based on motion vectors is proposed: it generates images at neighboring viewpoint positions by shifting the pixel positions of the original images, enabling continuous transitions between the discrete samples. This effectively solves the popping problem caused by insufficient sampling while preserving the real-time performance of the rendering system. Experimental results verify the effectiveness of the algorithm.

7.
盛斌, 吴恩华. 《软件学报》 (Journal of Software), 2008, 19(7): 1806-1816
This paper first derives and summarizes the transformation rules of the per-pixel depth field under 3D image warping, and proposes a pixel visibility determination method based on the depth field and the epipolar principle. Building on this theory, an image-based modeling and rendering (IBMR) technique called virtual plane mapping is proposed, which can render a scene from an arbitrary viewpoint in image space. During rendering, several virtual planes are first set up in the scene along the viewing direction; the pixels of the source depth image are transformed onto the virtual planes, and after an intermediate per-pixel transformation the virtual planes are converted into planar textures; the virtual planes are then stitched together so that the view is synthesized by planar texture mapping. The method can also quickly obtain a panorama for the current viewpoint on the inner side of the depth image, enabling real-time viewpoint roaming. It offers a large space of viewpoint motion and small storage requirements, can exploit the texture mapping capability of graphics hardware, and reproduces 3D surface relief details and parallax effects, overcoming the limitations of earlier, similar algorithms.

8.
Reconstructing high-quality 3D faces from images has long been an important research problem in computer vision and graphics. Unlike traditional narrow-baseline multi-view geometry based on stereo matching and data-driven face morphing methods, this paper proposes a method for reconstructing high-quality 3D face models from images that combines mesh deformation techniques with stereo vision principles. Given several face images taken from different viewpoints, reliable camera extrinsic parameters and sparse 3D points are obtained from robust image features. On this basis, a 3D face deformation algorithm that combines geometric detail preservation with image consistency constraints is proposed: by deforming the mesh of a face template, the visible projections of the deformed face are made to have consistent image color intensities across the multiple images. Template-based deformation effectively handles occlusion when projecting the 3D model into the images; robust estimation removes the influence of noise, outliers, and illumination on the convergence of the objective function; and repeated nonlinear optimization of the objective further improves reconstruction quality. Experiments on synthetic and real face images show that the proposed algorithm can reconstruct high-quality 3D face models from a few wide-baseline images.

9.
A normal-based model deformation method is used to reconstruct a high-quality 3D face from a single image. The normal of each vertex of the model is computed using spherical harmonics and an initial reference model, and the normals are then used to deform the reference model. Experimental results show that the proposed algorithm can reconstruct detailed, high-quality 3D faces from a single image.

10.
A GPU-accelerated real-time image-based rendering algorithm is proposed. The algorithm uses a polar coordinate system to generate a spherical depth image that samples the object uniformly from all directions; two derived pre-transformation formulas then pre-warp the single spherical depth image onto a view-dependent tangent plane of the object's bounding sphere to produce an intermediate image; texture mapping finally produces the target image. Exploiting the programmability and parallelism of modern graphics hardware, the pre-transformation is moved to the vertex shader to speed up rendering, and hardware rasterization performs the image interpolation so that the result is continuous and hole-free. Per-pixel lighting and environment mapping are also computed in the pixel shader to produce high-quality shading. Finally, the paper removes the viewpoint restrictions of the algorithm, designs a dynamic level-of-detail (LOD) scheme, and implements a real-time walkthrough system that preserves correct occlusion relationships between objects.
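As a small illustration of the spherical depth image representation mentioned above (not the paper's two derived pre-transformation formulas, which are not reproduced here), the sketch below converts a depth image sampled uniformly in polar coordinates around the bounding-sphere centre back into 3D points.

```python
# Illustrative sketch (generic spherical parameterization, not the paper's pre-warp):
# turn a spherical depth image, sampled uniformly in polar coordinates around the
# object's bounding-sphere center, back into 3D points.
import numpy as np

def spherical_depth_to_points(depth, center=np.zeros(3)):
    """depth[i, j] is the distance from the sphere center along direction (theta_i, phi_j)."""
    n_theta, n_phi = depth.shape
    theta = np.linspace(0.0, np.pi, n_theta)                    # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)  # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)                       # unit ray directions
    return center + depth[..., None] * dirs                     # (n_theta, n_phi, 3) points

# Toy example: constant depth 1 from the center reproduces a unit sphere.
points = spherical_depth_to_points(np.ones((64, 128)))
print("max radius:", np.linalg.norm(points, axis=-1).max())     # ~1.0
```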

11.
The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes, and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
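One possible reading of the contour-synthesis step is sketched below: exemplar silhouettes are resampled to a common arc-length parameterization and blended with convex weights. This is an assumption about how such a convex combination could be implemented, not the authors' code; their method may establish correspondence between contours differently.

```python
# Illustrative sketch (not the authors' implementation): synthesize a new tree contour as a
# convex combination of exemplar contours after resampling them to a common point count.
import numpy as np

def resample_contour(contour, n=200):
    """Resample a closed 2D polyline to n points, uniformly spaced by arc length."""
    closed = np.vstack([contour, contour[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], n, endpoint=False)
    return np.c_[np.interp(targets, s, closed[:, 0]),
                 np.interp(targets, s, closed[:, 1])]

def blend_contours(contours, weights):
    """Convex combination of exemplar contours (weights are nonnegative and sum to 1)."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    resampled = np.stack([resample_contour(c) for c in contours])
    return np.tensordot(weights, resampled, axes=1)

# Toy exemplars: a circle-like and a square-like crown silhouette.
t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
square = np.c_[np.clip(1.3 * np.cos(t), -1, 1), np.clip(1.3 * np.sin(t), -1, 1)]
new_contour = blend_contours([circle, square], [0.6, 0.4])
print("synthesized contour points:", new_contour.shape)
```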

12.
3D video billboard clouds reconstruct and represent a dynamic three-dimensional scene using displacement-mapped billboards. They consist of geometric proxy planes augmented with detailed displacement maps and combine the generality of geometry-based 3D video with the regularization properties of image-based 3D video. 3D video billboards are an image-based representation placed in the disparity space of the acquisition cameras and thus provide a regular sampling of the scene with a uniform error model. We propose a general geometry filtering framework which generates time-coherent models and removes reconstruction and quantization noise as well as calibration errors. This replaces the complex and time-consuming sub-pixel matching process in stereo reconstruction with a bilateral filter. Rendering is performed using a GPU-accelerated algorithm which generates consistent view-dependent geometry and textures for each individual frame. In addition, we present a semi-automatic approach for modeling dynamic three-dimensional scenes with multiple 3D video billboard clouds.
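The bilateral filtering idea mentioned above can be illustrated with a plain (non-GPU) implementation applied to a noisy displacement/disparity map; the window size, sigmas, and toy data below are assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's GPU pipeline): a brute-force bilateral filter applied
# to a noisy disparity/displacement map, smoothing it while preserving depth discontinuities.
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter: spatial Gaussian weight times range (value-difference) Gaussian weight."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((window - img[y, x]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = (weights * window).sum() / weights.sum()
    return out

# Toy displacement map: a depth step plus quantization-like noise.
rng = np.random.default_rng(1)
clean = np.hstack([np.full((64, 32), 0.2), np.full((64, 32), 0.8)])
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = bilateral_filter(noisy)
print("MAE vs clean before/after:", np.abs(noisy - clean).mean(), np.abs(smoothed - clean).mean())
```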

13.
Sensor data, typically images and laser data, are essential to modeling real plants. However, due to the complex geometry of plants, the measurement data are generally limited, which makes it very difficult to classify and construct the plant organs, i.e. leaves and branches. The paper presents an approach to modeling plants with the sensor data by detecting reliable sharp features, i.e. the leaf apexes of plants with leaves and the branch tips of plants without leaves, on volumes recovered from the raw data. The extracted features provide good estimations of the correct positions of the organs. Thereafter, the leaves are reconstructed separately by simply fitting and optimizing a generic leaf model. One advantage of the method is that it involves limited manual intervention. For plants without leaves, we develop an efficient strategy for decomposition-based skeletonization that uses the tip features to reconstruct the 3D models from noisy laser data. Experiments show that the sharp feature detection algorithm is effective, and that the proposed plant modeling approach is capable of constructing realistic models from sensor data. Supported in part by the National Basic Research Program of China (Grant No. 2004CB318000), the National High-Tech Research & Development Program of China (Grant Nos. 2006AA01Z301, 2006AA01Z302, 2007AA01Z336), and the Key Grant Project of the Chinese Ministry of Education (Grant No. 103001).

14.
An Interactive Scene Modeling System Based on Multiple Images   Cited by: 7 (self-citations: 0, others: 7)
Taking as input multiple non-wide-angle photographs captured by an ordinary, freely moving camera, the system provides the user with a set of simple, easy-to-use interactive tools to recover the geometry and surface textures of a real scene, ultimately producing a highly realistic 3D model. Technically, the system reconstructs the scene hierarchically, level by level of geometric space, and makes full use of the geometric constraints present in the scene to refine the reconstruction. Experimental results show that 3D models produced by the system are accurate and realistic, meeting the requirements of applications such as virtual reality and visualization.

15.
This paper introduces the novel volumetric methodology "appearance-cloning" as a viable solution for achieving improved photo-consistent scene recovery, including greatly enhanced geometric recovery performance, from a set of photographs taken at arbitrarily distributed multiple camera viewpoints. We do so while solving many of the problems associated with previous stereo-based and volumetric methodologies. We redesign the photo-consistency decision problem of individual voxels in volumetric space as a photo-consistent shape search problem in image space, by generalizing the concept of the point correspondence search between two images in stereo-based approaches within a volumetric framework. In detail, we introduce a self-constrained, greedy-style optimization methodology, which iteratively searches for a more photo-consistent shape based on a probabilistic shape photo-consistency measure, using probabilistic competition between candidate shapes. Our new measure is designed to recover the probabilistic photo-consistency of a shape by comparing the appearances captured from multiple cameras with those rendered from that shape using the per-pixel Maxwell model in image space. Through various scene recovery experiments, including specular and dynamic scenes, we demonstrate that if enough appearances are given to reflect the scene's characteristics, our appearance-cloning approach can successfully recover both the geometry and photometry information of a scene without any kind of scene-dependent algorithm tuning.

16.
Visual Modeling with a Hand-Held Camera   Cited by: 10 (self-citations: 0, others: 10)
In this paper, a complete system to build visual models from camera images is presented. The system can deal with uncalibrated image sequences acquired with a hand-held camera. Based on tracked or matched features, the relations between multiple views are computed. From this, both the structure of the scene and the motion of the camera are retrieved. The ambiguity of the reconstruction is restricted from projective to metric through self-calibration. A flexible multi-view stereo matching scheme is used to obtain a dense estimation of the surface geometry. From the computed data, different types of visual models are constructed. Besides the traditional geometry- and image-based approaches, a combined approach with view-dependent geometry and texture is presented. As an application, the fusion of real and virtual scenes is also shown.

17.
Image-Based Modeling by Joint Segmentation   Cited by: 1 (self-citations: 0, others: 1)
The paper first traces image-based modeling back to the feature tracking and factorization techniques developed in the group led by Kanade since the eighties. Both feature tracking and factorization have inspired and motivated many important algorithms in structure from motion, 3D reconstruction, and modeling. We then revisit the recent quasi-dense approach to structure from motion. The key advantage of the quasi-dense approach is that it not only delivers the structure from motion in a robust manner for practical modeling purposes, but also provides a cloud of sufficiently dense 3D points that allows the objects to be explicitly modeled. To structure the available 3D points and registered 2D image information, we argue that a joint segmentation of both 3D and 2D is the fundamental stage for the subsequent modeling. We finally propose a probabilistic framework for the joint segmentation. The optimal solution to such a joint segmentation is still generally intractable, but approximate solutions are developed in this paper. These methods are implemented and validated on real data sets.

18.
In this paper we present a hybrid algorithm for building the bounding volume hierarchy (BVH) that is used in accelerating ray tracing of animated models. The algorithm precomputes densely packed clusters of triangles on surfaces. Following that, the set of clusters is used to rebuild the BVH in every frame. Our approach exploits the assumption that groups of connected triangles remain connected throughout the course of the animation. We introduce a novel heuristic to create triangle clusters designed for high-performance ray tracing; this heuristic combines the density of connectivity, the geometric size, and the shape of the cluster.
Our approach accelerates the BVH builder by an order of magnitude by rebuilding only over the set of clusters, which is much smaller than the original set of triangles. The speed-up is measured against a 'brute-force' BVH builder that repartitions all triangles in every frame of animation without using any pre-clustering. The rendering performance is not affected when a cluster contains a few dozen triangles. We demonstrate real-time/interactive ray tracing performance for highly dynamic complex models.
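A minimal sketch of per-frame BVH rebuilding over precomputed triangle clusters is given below, assuming the clusters are fixed and only the vertex positions change between frames; the median-split heuristic and the toy mesh are illustrative choices, not the authors' clustering heuristic or builder.

```python
# Illustrative sketch (not the authors' system): rebuild a BVH per frame over precomputed
# triangle clusters by recomputing each cluster's AABB and median-splitting on centroids.
import numpy as np

def cluster_aabbs(vertices, cluster_tri_indices, triangles):
    """Axis-aligned bounding box of each cluster for the current frame's vertex positions."""
    boxes = []
    for tri_ids in cluster_tri_indices:
        pts = vertices[triangles[tri_ids].reshape(-1)]
        boxes.append((pts.min(axis=0), pts.max(axis=0)))
    return boxes

def build_bvh(boxes, indices=None):
    """Median-split BVH over cluster AABBs; leaves store cluster indices."""
    if indices is None:
        indices = list(range(len(boxes)))
    lo = np.min([boxes[i][0] for i in indices], axis=0)
    hi = np.max([boxes[i][1] for i in indices], axis=0)
    if len(indices) <= 2:
        return {"bounds": (lo, hi), "clusters": indices}
    axis = int(np.argmax(hi - lo))                        # split along the longest axis
    indices = sorted(indices, key=lambda i: (boxes[i][0][axis] + boxes[i][1][axis]) * 0.5)
    mid = len(indices) // 2
    return {"bounds": (lo, hi),
            "left": build_bvh(boxes, indices[:mid]),
            "right": build_bvh(boxes, indices[mid:])}

# Toy animated mesh: vertices move each frame, cluster membership stays fixed.
rng = np.random.default_rng(2)
verts = rng.uniform(0, 1, (300, 3))
tris = rng.integers(0, 300, (200, 3))
clusters = np.array_split(np.arange(200), 8)              # 8 precomputed clusters
for frame in range(3):
    verts += 0.01 * rng.standard_normal(verts.shape)      # animation step
    bvh = build_bvh(cluster_aabbs(verts, clusters, tris)) # per-frame rebuild over clusters
print("root bounds:", bvh["bounds"][0], bvh["bounds"][1])
```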

19.
We present an algorithm called Procrustes-Lo-RANSAC (PLR) to recover complete 3D models of articulated objects. Structure-from-motion techniques are used to capture 3D point cloud models of an object in two different configurations. Procrustes analysis, combined with a locally optimized RANSAC sampling strategy, facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. With the resulting articulated model, a robotic system is then able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom. Because the models capture all sides of the object, they are occlusion-aware, meaning that the robot has knowledge of parts of the object that are not visible in the current view. Our algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of real-world objects containing both revolute and prismatic joints.
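The Procrustes-analysis ingredient of PLR corresponds to the classical rigid alignment (Kabsch) problem; the sketch below shows only that step on synthetic correspondences and omits the Lo-RANSAC sampling and the joint-axis recovery and classification.

```python
# Illustrative sketch (only the Procrustes/Kabsch alignment step, not the full PLR pipeline):
# recover the rigid rotation and translation that best align two corresponding point sets.
import numpy as np

def procrustes_rigid(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t (Kabsch algorithm)."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))                # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# Toy check: rotate a point cloud about z by 30 degrees, translate it, then recover the motion.
rng = np.random.default_rng(3)
pts = rng.standard_normal((100, 3))
angle = np.radians(30)
r_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moved = pts @ r_true.T + np.array([0.3, -0.2, 0.5])
r_est, t_est = procrustes_rigid(pts, moved)
print("rotation error:", np.linalg.norm(r_est - r_true))
```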

20.
The manufacturing of a mechanical part is a dynamic evolution process from a raw workpiece to the final part, in which the generation of serial 3D models reflecting the changes in geometric shape is especially critical to digital manufacturing. In this paper, an approach driven by the process planning course, the machining semantics, and the machining geometry is proposed to incrementally reconstruct the serial 3D models of a rotational part's dynamic evolution. The two major techniques involved are: (1) extraction of machining semantics based on process planning language understanding; (2) 3D reconstruction from 2D procedure working drawings guided by machining semantics, and visualization of the reconstructed series of 3D models. Compared with conventional 3D reconstruction methods, this approach introduces the process planning course and its relevant information to implement a dynamic, incremental, and knowledge-based reconstruction, which greatly reduces the reconstruction effort and extends the collection of geometric shapes that can be reconstructed.
