Similar Documents
18 similar documents found.
1.
Research and Implementation of a Virtual Real-Scene Space Based on Augmented Reality   (Cited by: 1; self: 0, others: 1)
Augmented reality composites computer-generated virtual objects or other information into the real scene seen by the user. This paper combines augmented reality with image-based rendering to design and implement a virtual real-scene space system based on augmented reality. It first introduces the construction of the virtual real-scene space, then focuses on methods for compositing virtual objects with the real-scene space, chiefly addressing the geometric-consistency and illumination-consistency problems that arise in the composition.

2.
Augmented reality can composite computer-generated virtual objects or other information into the real scene seen by the user. Combining augmented reality with image-based rendering, this paper introduces the construction of a virtual real-scene space, discusses methods for compositing virtual objects with the real-scene space, and presents a mechanism for walkthroughs in the virtual real-scene space together with its implementation. The mechanism balances complexity against freedom of movement and is practical in use.

3.
A Landscape Planning System Based on Augmented Reality   (Cited by: 11; self: 0, others: 11)
Augmented reality is a technique that composites computer-generated virtual objects or other information into the real world seen by the user; with it, designers can plan and lay out buildings effectively. This paper describes the implementation of a typical augmented reality system: a landscape planning system. To address the geometric-consistency and illumination-consistency problems that arise when compositing virtual scenery with a real scene, geometric consistency is achieved through local 3D reconstruction and interactive camera calibration; for illumination consistency, interactive lighting specification is combined with automatic recovery, and the shadows cast by virtual objects are also handled. Using the composite images generated by the system for different application scenarios, designers can intuitively and effectively evaluate landscape planning results.

4.
Research and Implementation of Scheduling Algorithms in a Virtual Walkthrough System   (Cited by: 3; self: 0, others: 3)
喻罡  崔杜武  王竹荣 《计算机工程》2002,28(12):115-117,202
This paper studies the resource-scheduling problem in a virtual real-scene walkthrough system and proposes algorithms for it. The algorithms make full use of the computer's resources and schedule virtual objects sensibly, so that real-scene images can be browsed smoothly and without delay during a walkthrough. They have been implemented in software with good results.

5.
A virtual real-scene space is a virtual space built from real-scene images or video. Compared with a traditional virtual space built from 3D models and rendered in real time by the computer, it requires no complex 3D modelling and places low demands on computing power. A virtual real-scene space lets the user move forward and backward, look up and down, and pan around 360 degrees. This paper summarizes the spatial model and some key techniques of HVS, a system for constructing virtual real-scene spaces.

6.
The virtual real-scene space system HVS automatically stitches, warps, and organizes many discrete real-scene images or continuous video to generate a virtual space. Such a space has photo-quality visual effects and is called a virtual real-scene space. It offers walkthrough capabilities such as moving forward and backward, looking up and down, 360-degree panning, and zooming in and out, and it runs on PC platforms. HVS is a platform for generating and walking through virtual real-scene spaces; this paper introduces its model, composition, and implementation.

7.
Caching and Scheduling Strategies for Real-Time Walkthroughs in Virtual Real-Scene Spaces   (Cited by: 6; self: 0, others: 6)
A virtual reality space that integrates multiple kinds of media must first provide an immersive real-time walkthrough mechanism that guarantees smooth visuals during navigation and maintains visual consistency for the user. Based on how information is organized in a virtual real-scene space, this paper analyses the memory-management and thread synchronization and scheduling problems that real-time walkthroughs in such an environment must solve, and designs and implements a real-time multithreaded walkthrough mechanism.
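The cache-and-prefetch idea in this abstract can be sketched roughly as follows. This is a hypothetical single-threaded illustration, not the paper's multithreaded design; the `NodeCache` class, its `capacity`, and the `loader` callback are names invented here: keep recently viewed panorama nodes in a bounded LRU cache and prefetch the neighbours of the current node so the next step renders without a load stall.

```python
from collections import OrderedDict

class NodeCache:
    """Bounded LRU cache for panorama nodes, with neighbour prefetch."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader              # loads a panorama node by id
        self.cache = OrderedDict()

    def get(self, node_id):
        if node_id in self.cache:
            self.cache.move_to_end(node_id)   # mark as most recently used
            return self.cache[node_id]
        data = self.loader(node_id)           # cache miss: load from disk/net
        self.cache[node_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

    def prefetch(self, neighbour_ids):
        # In a real walkthrough system this would run on a background thread.
        for nid in neighbour_ids:
            self.get(nid)

loads = []
cache = NodeCache(capacity=3,
                  loader=lambda nid: (loads.append(nid), f"pano-{nid}")[1])
cache.get(0)                # initial node: one load
cache.prefetch([1, 2])      # prefetch neighbours of node 0
cache.get(1)                # user steps forward: served from cache, no stall
```

With `capacity=3`, stepping to a fourth node evicts the least recently used one, bounding memory while keeping the nodes the user is most likely to revisit.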

8.
Research and Implementation of the Virtual Real-Scene Space System HVS   (Cited by: 2; self: 0, others: 2)
The virtual real-scene space system HVS automatically stitches, warps, and organizes many discrete real-scene images or continuous video to generate a virtual space. Such a space has photo-quality visual effects and is called a virtual real-scene space. It offers walkthrough capabilities such as moving forward and backward, looking up and down, 360-degree panning, and zooming in and out, and it runs on PC platforms. HVS is a platform for generating and walking through virtual real-scene spaces; this paper introduces its model, composition, and implementation.

9.
A Walkthrough Mechanism for Virtual Real-Scene Spaces and Its Implementation   (Cited by: 4; self: 0, others: 4)
A virtual real-scene space is a virtual space built from real-scene images. Free walkthroughs in image-based virtual spaces are a current research focus in virtual reality. This paper presents a walkthrough mechanism for virtual real-scene spaces and a method for implementing it; the mechanism balances complexity against freedom of movement and is practical in use.

10.
HVS: A Virtual Reality System Based on Real-Scene Images   (Cited by: 14; self: 4, others: 10)
Most virtual reality systems are based on computer graphics: a 3D geometric model of the virtual scene is first built from polygons, and the computer then renders in real time the view the user sees, according to the user's viewpoint and viewing direction. HVS takes a different approach: the computer automatically stitches, warps, and organizes many discrete real-scene images or continuous video to generate the virtual scene. Such a scene has photo-quality visual effects, and we call it a virtual real-scene space. It offers walkthrough capabilities such as moving forward and backward, looking up and down, 360-degree panning, and zooming in and out, and running it does not require high-performance graphics hardware.

11.
We propose a novel approach to simulating the illumination of an augmented outdoor scene based on a legacy photograph. Unlike previous works, which take only surface radiosity or lighting-related prior information as the basis of illumination estimation, our method integrates both. By adopting spherical harmonics, we deduce a linear model with only six illumination parameters. The illumination of an outdoor scene is then calculated by solving a linear least-squares problem with a color constraint on the sunlight and the skylight. A high-quality environment map is then set up, leading to realistic rendering results. We also explore the problem of shadow casting between real and virtual objects without knowing the geometry of the objects that cast the shadows. An efficient method is proposed to project complex shadows (such as tree shadows) on the ground of the real scene onto the surface of the virtual object with texture mapping. Finally, we present a unified scheme for image composition of a real outdoor scene with virtual objects, ensuring illumination consistency and shadow consistency. Experiments demonstrate the effectiveness and flexibility of our method.
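The linear least-squares formulation mentioned above can be illustrated with a small sketch. This is not the paper's six-parameter sun-and-sky model; it solves for the four first-order spherical-harmonic coefficients of a grayscale lighting environment from known surface normals and observed shading, and the helper names `sh_basis` and `estimate_lighting` are invented for illustration.

```python
import numpy as np

def sh_basis(normals):
    """First-order SH basis evaluated at unit normals: (N, 3) -> (N, 4)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c0 = 0.282095                       # Y_0^0 constant band
    c1 = 0.488603                       # Y_1^{-1}, Y_1^0, Y_1^1 linear band
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def estimate_lighting(normals, shading):
    """Solve B @ L = shading for the SH lighting vector L (least squares)."""
    B = sh_basis(normals)
    L, *_ = np.linalg.lstsq(B, shading, rcond=None)
    return L

# Synthetic check: shading generated from a known lighting vector
rng = np.random.default_rng(0)
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # unit surface normals
L_true = np.array([1.0, 0.2, 0.7, -0.3])        # "ground-truth" lighting
shading = sh_basis(n) @ L_true                  # observed shading values
L_est = estimate_lighting(n, shading)
```

With noiseless synthetic shading the least-squares solve recovers the lighting vector exactly; on a real photograph the shading observations would come from the image, with additional constraints as in the paper.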

12.
A distributed virtual environment is a virtual space that simulates the real world; causal-consistency control in it has real-time requirements and must be maintained before an event's lifetime expires. Under large-scale network conditions, however, high transmission latency and network dynamics mean some events cannot arrive in time, so causal relations among the events that have arrived cannot be propagated effectively within the lifetime constraint. Among existing methods, some assume all events always arrive in time and ignore the lifetime constraint on causality; others do account for the lifetime, but their causality propagation requires precisely synchronized simulation clocks, and their control efficiency drops rapidly as the system scales, limiting the generality and real-time behaviour of the virtual environment. This paper proposes LCO, a causal-consistency control method under lifetime constraints. Its key techniques include time-value comparison between asynchronous clocks, termination conditions for selecting causal control information along multiple paths, and network-aware dynamic adjustment of causal control information; when some events fail to arrive in time, LCO can still compute causal propagation relations from the events that have arrived. Experiments show that LCO maintains causal consistency within event lifetimes while keeping the amount of causal control information independent of system scale, reducing network transmission and computation overhead.

13.
Augmented reality systems allow users to interact with real and computer-generated objects by displaying 3D virtual objects registered in a user's natural environment. Applications of this powerful visualization tool include previewing proposed buildings in their natural settings, interacting with complex machinery for purposes of construction or maintenance training, and visualizing in-patient medical data such as ultrasound. In all these applications, computer-generated objects must be visually registered with respect to real-world objects in every image the user sees. If the application does not maintain accurate registration, the computer-generated objects appear to float around in the user's natural environment without having a specific 3D spatial position. Registration error is the observed displacement in the image between the actual and intended positions of virtual objects.
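As a toy illustration of registration error as defined in this abstract, one can compute the pixel displacement between the intended and actual image positions of a virtual point under a simple pinhole camera. The focal length and principal point below are assumed values for illustration, not from the paper.

```python
import numpy as np

def project(p, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D camera-space point to pixel coordinates."""
    x, y, z = p
    return np.array([f * x / z + cx, f * y / z + cy])

def registration_error(p_intended, p_actual):
    """Image-space displacement (pixels) between intended and actual points."""
    return float(np.linalg.norm(project(p_intended) - project(p_actual)))

# A 5 mm tracking offset at 1 m depth
err = registration_error(np.array([0.0, 0.0, 1.0]),
                         np.array([0.005, 0.0, 1.0]))
```

Under these assumed intrinsics, a 5 mm lateral tracking offset at 1 m depth shows up as a 4-pixel displacement, which is the kind of visible drift the abstract describes.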

14.
Occlusion in collaborative augmented environments   (Cited by: 1; self: 0, others: 1)
Augmented environments superimpose computer enhancements on the real world. Such environments are well suited to collaboration among multiple users. To improve the quality and consistency of the augmentation, the occlusion of real objects by computer-generated objects, and vice versa, has to be implemented. We present methods for doing this for a tracked user's body and for other real objects, and for reducing irritating artifacts due to misalignment. Our approach simulates the occlusion of virtual objects by a representation of the user modeled as kinematic chains of articulated solids. Smoothing the border between the virtual world and the occluding real objects reduces the registration and modeling errors of this model. Finally, an implementation in our augmented environment and the resulting improvements are presented.

15.
Achieving convincing visual consistency between virtual objects and a real scene mainly relies on the lighting effects of virtual-real composite scenes. The problem becomes more challenging when lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress. The estimated geometry, semantic labels, and intrinsic components provide mostly coarse information and are not accurate enough to re-render the whole scene. However, carefully integrating this estimated coarse information can yield an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene understanding technology to estimate the parameters of a ray-based illumination model for lighting virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is taken as the irradiance of the selected small surfaces. The virtual objects are then illuminated by the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment, or imaging information for the image.

16.
The StOMP algorithm is well suited to large-scale underdetermined applications in sparse vector estimation. It can reduce computational complexity and has some attractive asymptotic statistical properties. However, its estimation speed comes at the cost of accuracy. This paper suggests an improvement on the StOMP algorithm that is more efficient at finding a sparse solution to large-scale underdetermined problems. Compared with StOMP, the modified algorithm not only estimates the parameters of the distribution of matched-filter coefficients more accurately, but also improves the estimation accuracy of the sparse vector itself. A theoretical success boundary, based on a large-system limit, is provided for approximate recovery of the sparse vector by the modified algorithm, validating that it is more efficient than StOMP. Computations with simulated data show that, without a significant increase in computation time, the proposed algorithm greatly improves estimation accuracy.
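A rough sketch of plain StOMP (not the paper's modified variant) conveys the stagewise idea: at each stage, matched-filter coefficients exceeding a multiple of the formal noise level of the residual join the support, and a least-squares fit on the support updates the residual. The threshold multiplier `t=2.5` is a typical choice, assumed here.

```python
import numpy as np

def stomp(A, y, stages=10, t=2.5):
    """Stagewise OMP sketch. A: (n, m) sensing matrix, y: (n,) measurements."""
    n, m = A.shape
    support = np.zeros(m, dtype=bool)
    r = y.copy()
    x = np.zeros(m)
    for _ in range(stages):
        c = A.T @ r                           # matched-filter coefficients
        sigma = np.linalg.norm(r) / np.sqrt(n)  # formal noise level
        hits = np.abs(c) > t * sigma          # hard threshold: new candidates
        if not hits.any():
            break
        support |= hits                       # grow the support stagewise
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(m)
        x[support] = x_s                      # least-squares fit on support
        r = y - A @ x                         # update residual
        if np.linalg.norm(r) < 1e-10:
            break
    return x

# Noiseless synthetic recovery test
rng = np.random.default_rng(1)
n, m, k = 128, 256, 8
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
idx = rng.choice(m, size=k, replace=False)
x_true[idx] = rng.uniform(1.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true
x_hat = stomp(A, y)
```

Because each stage can admit many coordinates at once, StOMP typically needs only a handful of stages, which is the source of its speed; the paper's modification targets the accuracy lost to this aggressive selection.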

17.
The sense of being within a three-dimensional (3D) space and interacting with virtual 3D objects in a computer-generated virtual environment (VE) often requires essential image, vision, and sensor signal processing techniques such as differentiation and denoising. This paper describes novel implementations of Gaussian filtering for characteristic signal extraction and of wavelet-based image denoising algorithms that run on the graphics processing unit (GPU). While significant acceleration over standard CPU implementations is obtained by exploiting the data parallelism provided by modern programmable graphics hardware, the CPU is freed up to run other computations, such as artificial intelligence (AI) and physics, more efficiently. The proposed GPU-based Gaussian filtering can extract surface information from a real object and provide its material features for rendering and illumination. The wavelet-based signal denoising for large digital images realized in this project provided better realism for VE visualization without sacrificing the real-time, interactive performance of an application.
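The Gaussian filtering discussed in this abstract parallelizes well on a GPU largely because the 2D kernel is separable into two 1D passes, with every pixel independent within a pass. A CPU-side NumPy sketch of that separable structure (a GPU version would map each pass to a fragment or compute shader, one thread per pixel) might look like:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(img, sigma=1.5):
    """Separable 2D Gaussian: one 1D pass over rows, one over columns."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    pad = np.pad(img, radius, mode="reflect")   # handle image borders
    # Horizontal pass: each row filtered independently
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    # Vertical pass: each column filtered independently
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

# Impulse response: a single bright pixel spreads into a Gaussian blob
img = np.zeros((32, 32))
img[16, 16] = 1.0
blurred = gaussian_blur(img)
```

Separability turns an O(w*h*r^2) direct convolution into two O(w*h*r) passes, which is the same structural trick the GPU implementation exploits.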

18.
Augmented reality aims to superimpose computer-generated virtual objects onto real scenes. Convincing virtual-real fusion requires estimating the scene illumination. For scenes with specular highlights, this paper estimates the illumination effectively by exploiting the different reflected-light components in the scene. First, a pixel-clustering-based image decomposition separates the reflected light, yielding a diffuse map and a specular map; the diffuse map is further decomposed into intrinsic images, an albedo map and a shading map. The decomposition results are then combined with scene depth to compute the illumination of the input image. Finally, virtual objects are rendered with a global illumination model, producing lighting effects in which the virtual and real scenes are highly fused.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号