Similar Documents
20 similar documents found.
1.
Augmented reality can composite computer-generated virtual objects or other information into the real scene seen by the user. This work combines augmented reality with image-based rendering: it introduces the construction of a virtual real-scene space, discusses methods for compositing virtual objects into the real-scene space, and presents a walkthrough mechanism for the virtual real-scene space together with its implementation. The mechanism balances implementation complexity against freedom of navigation, and the resulting method is quite practical.

2.
Research and Implementation of a Virtual Real-Scene Space Based on Augmented Reality
Augmented reality can composite computer-generated virtual objects or other information into the real scene seen by the user. This paper combines augmented reality with image-based rendering and implements a virtual real-scene space system based on augmented reality. It first introduces the construction of the virtual real-scene space, then focuses on methods for compositing virtual objects into the real-scene space, addressing mainly the geometric consistency and illumination consistency problems of the composition.

3.
Research on Techniques for Embedding Virtual Objects in Real-Scene Space
A real-scene space is a virtual space with 3D manipulation capability constructed from real-scene images, and augmented reality can composite computer-generated virtual objects or other information into the real scene seen by the user. This paper discusses how to embed computer-generated virtual objects into a real-scene space. It first introduces the concept and model of the real-scene space, then focuses on methods for compositing virtual objects into it, addressing mainly the geometric consistency and illumination consistency problems of the composition.

4.
3D Object Reconstruction Based on Two Images
With the development of virtual reality, fast and effective object modeling has become a research focus. Image-based modeling is an effective approach: its greatest advantage is that the information in images can be used to build an object's geometric model directly and to construct a photo-realistic 3D model quickly. The conventional approach first solves for the intrinsic and extrinsic camera parameters and then recovers the position of each spatial point in the world coordinate system. This paper proposes a new modeling method based on two images that exploits vanishing-point properties. The method first uses vanishing-point properties to derive an analytic expression for the camera center in the world coordinate system, then uses the spatial geometric constraints between the two images to back-compute the spatial position of each object point and reconstruct the object's geometric model, and finally performs texture mapping. Only the camera's intrinsic parameters are used during reconstruction, so calibration of the extrinsic parameters is skipped. Experimental results show that the method is simple and reliable and can be used for 3D reconstruction of objects with regular surfaces.
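The abstract does not reproduce its analytic expressions, so the following is only a minimal sketch of the standard vanishing-point geometry such methods build on: two image lines that depict parallel 3D edges intersect at a vanishing point, and back-projecting that point through the intrinsics K gives the 3D direction of those edges in the camera frame. The coordinates and the matrix K below are made-up illustration values.

```python
import numpy as np

def vanishing_point(p1, p2, q1, q2):
    """Intersect two image lines (each given by two pixel points) in
    homogeneous coordinates; the result is the vanishing point of the
    parallel 3D lines they depict."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])
    l1 = np.cross(to_h(p1), to_h(p2))      # line through p1, p2
    l2 = np.cross(to_h(q1), to_h(q2))      # line through q1, q2
    v = np.cross(l1, l2)                   # intersection of the two lines
    return v / v[2]

def direction_from_vanishing_point(v, K):
    """Back-project a vanishing point through the intrinsics K to obtain
    the unit 3D direction of the corresponding parallel scene lines."""
    d = np.linalg.inv(K) @ v
    return d / np.linalg.norm(d)

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],     # assumed intrinsics, illustration only
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    v = vanishing_point((100, 400), (300, 350), (120, 460), (330, 395))
    print(direction_from_vanishing_point(v, K))
```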

5.
Research on Image-Based Generation of Surround Views of Objects
Image-based surround view generation applies IBMR (image-based modeling and rendering) in place of traditional 3D object modeling to model and represent objects in a virtual scene. This paper examines the differences between image-based modeling and rendering for environment representation versus object representation, proposes an object sampling method that partitions the surround viewing space, establishes feature-point correspondences between adjacent samples to compute their relative relationships, and performs parallel view transformation and synthesis.

6.
Research on Light-Field-Based Rendering Techniques
Image-Based Rendering (IBR) has gradually become one of the main approaches to scene rendering, and many methods based on it have been proposed. Among them, Light Field Rendering (LFR) requires neither depth information nor image correspondences: the scene is captured as an input image set, either by a camera array or by a single camera moved along a designed path, and for any given new viewpoint a view can be obtained simply by resampling a few nearby sample points. This paper designs and implements a light field rendering software scheme using two-plane parameterization and uses it to achieve real-time walkthrough of objects in real scenes.
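The paper's software scheme is not given, but the two-plane parameterization it mentions can be illustrated with a toy resampler: a ray is indexed by its intersections (u, v) with the camera plane and (s, t) with the image plane, and a new view is built by blending the nearest recorded samples. The sketch below assumes a regular camera grid and that the focal plane coincides with the (s, t) plane; a real light field renderer would also reproject (s, t) per camera and interpolate in (s, t) as well.

```python
import numpy as np

def sample_light_field(images, cam_us, cam_vs, u, v, s, t):
    """Resample a two-plane light field L(u, v, s, t).

    `images[i][j]` is the picture taken by the camera at grid position
    (cam_us[i], cam_vs[j]) on the camera (u, v) plane; (s, t) are pixel
    coordinates on the image plane.  A new ray is rendered by bilinearly
    blending pixel (s, t) of the four cameras surrounding (u, v) -- the
    "few nearby samples" the abstract refers to."""
    i = int(np.clip(np.searchsorted(cam_us, u) - 1, 0, len(cam_us) - 2))
    j = int(np.clip(np.searchsorted(cam_vs, v) - 1, 0, len(cam_vs) - 2))
    au = np.clip((u - cam_us[i]) / (cam_us[i + 1] - cam_us[i]), 0.0, 1.0)
    av = np.clip((v - cam_vs[j]) / (cam_vs[j + 1] - cam_vs[j]), 0.0, 1.0)
    si, ti = int(round(s)), int(round(t))
    c00 = images[i][j][ti, si]
    c10 = images[i + 1][j][ti, si]
    c01 = images[i][j + 1][ti, si]
    c11 = images[i + 1][j + 1][ti, si]
    return ((1 - au) * (1 - av) * c00 + au * (1 - av) * c10 +
            (1 - au) * av * c01 + au * av * c11)
```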

7.
This paper describes the basic idea of building virtual battlefield scenes by combining virtual and real imagery. Target views that need to be manipulated or whose outline and shape must change are modeled as virtual scenery, i.e., with geometry-based models, while environment views are modeled as real scenery, i.e., with image-based rendering (IBR); the two kinds of models are then composited into one image. The resulting scenes are both highly realistic and highly interactive in real time. A virtual battlefield visual system combining images and geometric models was implemented, and the paper introduces key techniques including the system architecture, 3D model generation for target views, environment view generation, and virtual scene image generation and display.

8.
Single-image modeling methods or stereo-pair modeling methods can recover a model for each image or stereo pair in an image sequence, but the result is a partial model rather than a complete model of the object. To address this, a multi-view model merging strategy for image-based modeling is proposed. Exploiting the regularity of building models, and building on models recovered from single images, a simple, flexible, easy-to-implement multi-view merging method for buildings is given that yields a complete scene model. The method roughly consists of building a linked list of common view coordinate systems, coordinate transformation, vertex merging, and model closing. Overall the method is systematic and complete, and experimental results show that each step is feasible and effective.
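The abstract names its pipeline but not a data layout, so the sketch below only illustrates, under assumed conventions, the two mechanical steps it lists: transforming a partial model into the common coordinate system and merging coincident vertices.

```python
import numpy as np

def to_common_frame(vertices, T):
    """Apply a 4x4 rigid transform T (view frame -> common world frame)
    to an (N, 3) vertex array."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ T.T)[:, :3]

def merge_vertices(vertices, faces, eps=1e-3):
    """Merge vertices that coincide within `eps` once the partial models
    share a coordinate system, and reindex the faces accordingly (a simple
    stand-in for the paper's vertex-merging and model-closing steps)."""
    keys = np.round(vertices / eps).astype(np.int64)
    _, first_idx, inverse = np.unique(keys, axis=0,
                                      return_index=True, return_inverse=True)
    return vertices[first_idx], inverse[faces]
```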

9.
Single-image depth estimation is a classic problem in computer vision and matters for 3D scene reconstruction and for handling occlusion and illumination in augmented reality. This paper reviews work on single-image depth estimation and introduces the commonly used datasets and models. By scene type, the datasets fall into indoor, outdoor, and virtual (synthetic) scene datasets. By mathematical model, monocular depth estimation methods fall into traditional machine learning methods and deep learning methods. Traditional machine learning approaches usually model depth relations with a Markov random field (MRF) or conditional random field (CRF) and solve for depth by energy minimization within a maximum a posteriori framework. Depending on whether the model contains parameters, these methods further split into parametric learning methods, in which the model has unknown parameters and training solves for them, and non-parametric methods, which infer depth by similarity retrieval against existing datasets and need no learned parameters.

For deep learning based monocular depth estimation, the paper surveys the state of the art in China and abroad along with its strengths and weaknesses, and classifies the methods level by level, bottom-up, according to different criteria. The first level separates single-task methods that predict depth only from multi-task methods that jointly predict depth together with semantics and other information; because image depth and semantics are closely related, some work studies joint multi-task prediction. The second level separates absolute depth prediction from relative depth prediction: absolute depth is the actual distance from objects in the scene to the camera, whereas relative depth concerns the near-far ordering of objects in the image; given an arbitrary picture, human vision is better at judging relative ordering. The third level comprises supervised regression methods, supervised classification methods, and unsupervised methods.

For single-image depth estimation most work focuses on predicting absolute depth, and most early methods used supervised regression, i.e., labelled training data and regression over continuous depth values; given that scenes recede from near to far, some methods instead cast depth estimation as classification. Supervised learning requires a depth label for every RGB image, yet collecting depth labels usually needs a depth camera, whose range is limited, or LiDAR, which is expensive; moreover, the raw depth labels are typically sparse points that do not align well with the original image. Unsupervised estimation without depth labels is therefore the research trend; its basic idea is to use left and right views and solve for depth by combining epipolar geometry with autoencoder-style networks.
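As a concrete illustration of the unsupervised idea mentioned at the end (solving for depth from left-right views), the toy sketch below warps the right image of a rectified stereo pair into the left view with a predicted disparity map and measures the photometric reconstruction error; real methods use differentiable bilinear sampling, SSIM terms and smoothness priors rather than this nearest-pixel, L1-only version.

```python
import numpy as np

def reconstruct_left_from_right(right, disparity):
    """Warp the right image into the left view with a predicted disparity
    map (rectified pair: a pixel at column x in the left image appears at
    x - d in the right image).  Unsupervised training minimises the
    photometric error between this reconstruction and the real left image."""
    h, w = disparity.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    return right[ys, src_x]

def photometric_loss(left, right, disparity):
    """Mean absolute reconstruction error -- the core self-supervised signal."""
    rec = reconstruct_left_from_right(right, disparity)
    return np.abs(left.astype(float) - rec.astype(float)).mean()
```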

10.
To address the difficulty of collision interaction between virtual and real objects in traditional augmented reality systems, a method is proposed that segments the scene in a depth image and builds proxy geometry from the segmentation result to realize virtual-real collision interaction. A depth sensor such as the Kinect captures the color and depth images of the current real scene; normal-based clustering and plane fitting on the depth image identify the dominant plane regions; the remaining clustered point-cloud regions are then merged to obtain the other main object regions in the scene. A virtual plane is constructed as the proxy geometry for each identified dominant plane, and bounding boxes are constructed as proxy geometry for the segmented objects. Overlaying these proxy geometries on the real objects and assigning them physical properties allows collision interaction between virtual and real objects to be simulated. Experimental results show that the method can effectively segment simple scenes and thereby enable virtual-real interaction.
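The plane-fitting step can be sketched with standard tools: back-project the depth image into a point cloud using the sensor intrinsics, then fit a plane to a clustered region by SVD. This is a generic illustration, not the paper's exact normal-clustering pipeline; the intrinsics K are assumed known (e.g., from the Kinect calibration).

```python
import numpy as np

def backproject(depth, K):
    """Turn a depth image (metres) into an (H*W, 3) point cloud using the
    pinhole intrinsics K."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (us.ravel() - K[0, 2]) * z / K[0, 0]
    y = (vs.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def fit_plane(points):
    """Least-squares plane fit: the normal is the singular vector of the
    centred points with the smallest singular value.  Applied to a
    clustered region, this yields a proxy plane of the kind the abstract
    describes."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid            # plane equation: normal . p + d = 0
    return normal, d
```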

11.
This paper presents a 2D-to-3D conversion scheme that generates a 3D human model from a single depth image together with several color images. In building a complete 3D model, no prior knowledge such as a pre-computed scene structure or photometric and geometric calibrations is required, since the depth camera directly acquires calibrated geometric and color information in real time. The proposed method deals with the self-occlusion problem that often occurs in images captured by a monocular camera: when an image is obtained from a fixed view, it may lack data for certain parts of an object because of occlusion. The method consists of the following steps. First, the noise in the depth image is reduced by a series of image processing techniques. Second, a 3D mesh surface is constructed using the proposed depth image-based modeling method. Third, the occlusion problem is resolved by removing the unwanted triangles in the occlusion region and filling the corresponding hole. Finally, textures are extracted and mapped onto the 3D surface of the model to give a photo-realistic appearance. Comparison with related work demonstrates the efficiency of the method in terms of visual quality and computation time. It can be used to create 3D human models in many 3D applications.
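A minimal sketch of the second and third steps (depth image-based meshing and dropping triangles that straddle occlusion boundaries) is given below; the `max_jump` threshold and the regular grid triangulation pattern are assumptions for illustration, not the paper's actual criteria.

```python
import numpy as np

def depth_to_mesh(points, depth, max_jump=0.05):
    """Triangulate the pixel grid of a depth image into a mesh, skipping
    triangles across depth discontinuities (the 'unwanted triangles in the
    occlusion region').

    `points` is the (H, W, 3) back-projected point grid, `depth` the
    (H, W) depth map, `max_jump` the largest depth difference allowed
    inside one grid cell (same units as the depth map)."""
    h, w = depth.shape
    idx = np.arange(h * w).reshape(h, w)
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            quad = depth[y:y + 2, x:x + 2]
            if quad.max() - quad.min() > max_jump or (quad <= 0).any():
                continue                      # skip occlusion edges and holes
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            faces.append((a, b, c))
            faces.append((b, d, c))
    return points.reshape(-1, 3), np.array(faces)
```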

12.
A virtual walkthrough method based on image-based rendering of polygonal cylindrical panoramas is proposed. Multiple panoramas are captured with an ordinary hand-held camera along a path within a polygonal region and stitched; depth is computed through SIFT-based feature point detection, and slit-image interpolation is used to achieve smooth walkthrough over the whole region. The method features simple acquisition, strong realism of the virtual scene, and support for continuous large-range walkthrough.
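The SIFT-based matching that the depth computation relies on can be sketched with OpenCV (`cv2.SIFT_create` exists in OpenCV 4.4+; older builds need the contrib package). The depth computation and slit-image interpolation themselves are not shown.

```python
import cv2

def match_sift(img1, img2, ratio=0.75):
    """Detect SIFT keypoints in two neighbouring panoramas and keep the
    matches that pass Lowe's ratio test; the pixel offsets of these matches
    are what a depth / interpolation step would build on."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```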

13.
In the field of augmented reality (AR), many kinds of vision-based extrinsic camera parameter estimation methods have been proposed to achieve geometric registration between the real and virtual worlds. Previously, a feature landmark-based camera parameter estimation method was proposed. This is an effective method for implementing outdoor AR applications because a feature landmark database can be constructed automatically using the structure-from-motion (SfM) technique. However, the previous method cannot work in real time because of the high computational cost of matching landmarks in the database against image features in an input image. In addition, the accuracy of the estimated camera parameters is insufficient for applications that need to overlay CG objects at positions close to the user's viewpoint, because it is difficult to compensate for the visual pattern change of close landmarks when only the sparse depth information obtained by SfM is available. In this paper, we achieve fast and accurate feature landmark-based camera parameter estimation by adopting the following approaches. First, the number of matching candidates is reduced, to achieve fast camera parameter estimation, by tentative camera parameter estimation and by assigning priorities to landmarks. Second, image templates of landmarks are adequately compensated for by considering the local 3D structure of a landmark using the dense depth information obtained by a laser range sensor. To demonstrate the effectiveness of the proposed method, we developed several AR applications using it.
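The paper's prioritized landmark matching is not reproduced here, but the pose computation such 2D-3D matches feed can be sketched with a generic RANSAC PnP call; this is a stand-in illustration, not the paper's estimator.

```python
import cv2
import numpy as np

def estimate_camera_pose(landmark_xyz, image_uv, K):
    """Estimate extrinsic camera parameters from 2D-3D matches between
    image features and database landmarks using RANSAC PnP."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(landmark_xyz, dtype=np.float32),
        np.asarray(image_uv, dtype=np.float32),
        K, None)                      # None: assume no lens distortion
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix from Rodrigues vector
    return ok, R, tvec, inliers
```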

14.
The great flexibility of a view camera allows the acquisition of high-quality images that would not be possible any other way. Bringing a given object into focus is, however, a long and tedious task, although the underlying optical laws are known. A fundamental parameter is the aperture of the lens entrance pupil, because it directly affects the depth of field: the smaller the aperture, the larger the depth of field. However, too small an aperture destroys the sharpness of the image because of diffraction at the pupil edges. Hence, the desired optimal configuration of the camera is the one in which the object is in focus at the greatest possible lens aperture. In this paper, we show that when the object is a convex polyhedron, an elegant solution to this problem can be found. It takes the form of a constrained optimization problem, for which theoretical and numerical results are given. The optimization algorithm has been implemented on the prototype of a robotised view camera.
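The optimization itself is specific to the paper, but the aperture/depth-of-field trade-off it exploits follows the usual thin-lens relations; the sketch below uses the common hyperfocal-distance approximation (valid when the focus distance is much larger than the focal length), and the focal length, f-numbers and circle of confusion are illustrative values only.

```python
def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Approximate near and far limits of the depth of field for a lens
    focused at `focus_dist_mm`.  A smaller aperture (larger f-number)
    yields a larger hyperfocal distance and hence a deeper sharp zone."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * focus_dist_mm / (hyperfocal + focus_dist_mm)
    far = (hyperfocal * focus_dist_mm / (hyperfocal - focus_dist_mm)
           if focus_dist_mm < hyperfocal else float("inf"))
    return near, far

# Example (illustrative values): 150 mm lens focused at 2 m.
print(depth_of_field(150, 11, 2000))    # smaller aperture (f/11): deeper zone
print(depth_of_field(150, 5.6, 2000))   # wider aperture (f/5.6): shallower zone
```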

15.
In virtual reality applications, the traditional modeling approach builds a geometric model for every object, yet scenes often contain many objects that are very complex or even hard to model. To simplify modeling while preserving realism, an image-based method for representing complex objects is proposed. Multiple images of a complex object, corresponding to different viewing directions, are photographed or rendered and managed in the program with a graph data structure; during a walkthrough, the image whose viewing direction corresponds to the camera's current position is selected for display. The approach preserves scene realism while simplifying the scene and improving the system's running speed.
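A minimal sketch of the view-selection step, picking the stored image whose capture direction best matches the current viewing direction, might look as follows; the flat array of directions is a stand-in for the graph structure the paper uses.

```python
import numpy as np

def pick_view(camera_dir, sample_dirs):
    """Return the index of the stored image whose capture direction is
    closest to the current viewing direction (largest dot product between
    unit vectors)."""
    camera_dir = camera_dir / np.linalg.norm(camera_dir)
    dirs = sample_dirs / np.linalg.norm(sample_dirs, axis=1, keepdims=True)
    return int(np.argmax(dirs @ camera_dir))
```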

16.
CCD camera modeling and simulation
In this paper we propose a model of an acquisition line made up of a CCD camera, a lens and a frame grabber card. The purpose of this modeling is to simulate the acquisition process in order to obtain images of virtual objects, and the response time has to be short enough to permit interactive simulation. All the stages are modeled. In the first phase, we present a geometric model which supplies a point-to-point transformation that provides, for a space point in the camera field, the corresponding point on the plane of the CCD sensor. The second phase consists of modeling the discrete space, which implies passing from the continuous known object view to a discrete image, in accordance with the different origins of contrast loss. In the third phase, the video signal is reconstituted in order to be sampled by the frame grabber card. The practical results are close to reality when compared with real image processing. This tool makes it possible to obtain a short-computation-time simulation of a vision sensor, enabling interactivity either with the user or with software for the design and simulation of an industrial workshop equipped with a vision system. It makes testing possible and validates the choice of sensor placement and of image processing and analysis. Thanks to this simulation tool, we can perfectly control the position of the object image placed under the camera and, in this way, characterise the performance of methods that determine object position with subpixel accuracy.
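The first-phase geometric model is essentially a pinhole point-to-point transformation; a bare sketch, ignoring lens distortion and the later discretisation and video-signal stages, is:

```python
import numpy as np

def project_point(X_world, R, t, K):
    """Map a 3D scene point to its pixel position on the CCD sensor plane
    via the extrinsics (R, t) and the intrinsics K."""
    X_cam = R @ X_world + t                 # world frame -> camera frame
    x = K @ (X_cam / X_cam[2])              # perspective division, then intrinsics
    return x[:2]
```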

17.
Objective: To reduce the visual fatigue caused by excessive horizontal parallax in stereoscopic images, a non-uniform depth compression method is presented for real-time rendered stereoscopic vision systems. Method: Within a single camera space, the two eye images are generated with different projection matrices, so horizontal parallax is controlled by the projection transform. To reduce artifacts caused by model deformation under depth compression, objects in different depth ranges are compressed with different ratios; the camera interaxial separation is expressed as a continuous function of depth, from which the coordinate transform that produces the two eye images within a single camera space is derived. Depth compression thus becomes a coordinate transform of the model, which guarantees a continuous variation of the compression ratio. Results: Experiments show that the method effectively improves the quality of stereoscopic images. Conclusion: The method is simple and efficient and can be applied to real-time stereoscopic systems such as games and virtual reality.
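The paper's exact separation function and coordinate transform are not given in the abstract; the sketch below only illustrates the idea of a camera separation that varies continuously with depth and is applied as a per-vertex horizontal shift before the usual projection. The linear ramp in `interaxial` is an assumption for illustration, not the paper's function.

```python
import numpy as np

def interaxial(z, near, far, b_near, b_far):
    """A continuous camera-separation function b(z): small separation for
    close geometry (limits parallax), larger separation for distant
    geometry.  A clamped linear ramp is used purely for illustration."""
    a = np.clip((z - near) / (far - near), 0.0, 1.0)
    return b_near + a * (b_far - b_near)

def eye_vertex(v_cam, eye, near, far, b_near, b_far):
    """Shift a camera-space vertex horizontally by half the depth-dependent
    separation (eye = -1 for left, +1 for right) before projection, turning
    the depth compression into a per-vertex coordinate transform."""
    b = interaxial(v_cam[2], near, far, b_near, b_far)
    return v_cam + np.array([eye * b / 2.0, 0.0, 0.0])
```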

18.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or on image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor, or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure-from-motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full-field-of-view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, leading to fast and robust algorithms for 3D photography. In between are multiple-view cameras with a large field of view, which we have built, as well as omni-directional sensors.

19.
Depth image-based rendering (DIBR), which renders virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D video conversion process. In this paper, a novel method is proposed to partially solve two puzzles of DIBR, namely virtual image generation and hole filling. The method combines two different approaches for synthesizing new views from an existing view and a corresponding depth map. Disoccluded parts of the synthesized image are first classified as either smooth or highly structured. In structured regions, inpainting is used to preserve the background structure; in other regions, an improved directional depth smoothing is used to avoid disocclusion. Thus, more details and straight-line structures in the generated virtual image are preserved. The key contributions are an enhanced adaptive directional filter and a directional hole-inpainting algorithm. Experiments show that disocclusion is removed and geometric distortion is reduced efficiently. The proposed method generates more visually satisfactory results.
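As background for where the disocclusions come from, a bare-bones DIBR forward warp might look as follows: horizontal-shift warping with a z-buffer, where the unfilled mask is what an inpainting or directional filtering stage (such as the one the paper proposes) then has to repair. The focal length and baseline are assumed inputs of the conversion setup.

```python
import numpy as np

def dibr_warp(color, depth, focal, baseline):
    """Forward-warp a colour image into a virtual view using its depth map:
    disparity = focal * baseline / depth, pixels shift horizontally, a
    z-buffer keeps the nearest surface, and unmapped pixels remain holes."""
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    hole = np.ones((h, w), dtype=bool)
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            nx = int(round(x - disparity[y, x]))
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]
                out[y, nx] = color[y, x]
                hole[y, nx] = False
    return out, hole
```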

20.
Addressing the limitations of traditional 3D modeling, this paper discusses an image-based modeling technique and proposes a method for 3D modeling of objects using an ordinary camera and a calibration object. The method calibrates the cameras using an object visible in both the left and right images [1]; the camera matrices of the left and right cameras are then used to back-compute the corresponding points in space and the key points to be recovered; finally, the spatial positions computed from these points [2] are used to reconstruct the object, and the result is displayed in an OpenGL walkthrough. Experiments show that the algorithm is accurate and highly robust and can meet the needs of virtual reality applications.
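The back-computation of spatial points from the left and right camera matrices is standard two-view triangulation; a sketch using OpenCV (assuming the calibration and key-point selection have already been done) is:

```python
import cv2
import numpy as np

def triangulate(P_left, P_right, pts_left, pts_right):
    """Back-compute 3D positions from corresponding points in the left and
    right images, given the two 3x4 camera matrices obtained from the
    calibration object.  Returns an (N, 3) array of Euclidean points."""
    pts_l = np.asarray(pts_left, dtype=np.float32).T    # 2xN
    pts_r = np.asarray(pts_right, dtype=np.float32).T
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T
```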
