Similar Documents
20 similar documents found (search time: 31 ms).
1.
Given enough CPU time, present graphics technology can render near-photorealistic images. However, for real-time graphics applications such as virtual reality systems, developers must make explicit programming decisions, trading off rendering quality for interactive update rates. In this paper we present a new algorithm for rendering complex 3D models at near-interactive rates that can be used in virtual environments composed of static or dynamic scenes. The algorithm integrates the techniques of levels of detail (LOD), visibility computation and object impostors. The method is particularly suitable for highly dynamic scenes with high depth complexity. We introduce a new criterion to identify the occluder and the occludee: the object that can be replaced by its LOD model and the one that can be replaced by its impostor. The efficiency of our algorithm is then illustrated by experimental results. Copyright © 2003 John Wiley & Sons, Ltd.
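As a hedged illustration of the quality/speed trade-off this entry describes, the sketch below picks a representation (LOD level or impostor) from an object's projected screen-space size. The thresholds and function names are hypothetical, not taken from the paper:

```python
import math

# Hypothetical screen-coverage thresholds for switching representations;
# a real system would tune these per application and hardware budget.
LOD_THRESHOLDS = [0.50, 0.20, 0.05]   # LOD 0 (full), LOD 1, LOD 2; below -> impostor

def projected_size(radius, distance, fov_y):
    """Approximate fraction of the screen height covered by a bounding sphere."""
    if distance <= radius:
        return 1.0                     # camera inside or touching the sphere
    return radius / (distance * math.tan(fov_y / 2.0))

def select_representation(radius, distance, fov_y=math.radians(60.0)):
    """Pick the finest level justified by the object's on-screen size;
    objects too small for any LOD level fall back to an impostor."""
    size = projected_size(radius, distance, fov_y)
    for level, threshold in enumerate(LOD_THRESHOLDS):
        if size >= threshold:
            return f"LOD {level}"
    return "impostor"

if __name__ == "__main__":
    for d in (2.0, 10.0, 50.0, 400.0):
        print(f"distance {d:6.1f} -> {select_representation(1.0, d)}")
```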

2.
Realistic Rendering of 3D Terrain Scenes Based on Fractal Theory (cited by 3: 0 self-citations, 3 external)
英振华, 石锐, 胡捷. 《计算机仿真》 2006, 23(5): 160-162
Realistic rendering of 3D terrain scenes on a PC has long been a challenging problem. It pursues two goals, fidelity and real-time performance, and must maintain a high interactive frame rate without noticeably degrading image quality. This paper generates 3D terrain scenes using fractal theory. To meet the real-time requirement, a continuous LOD algorithm and other techniques are adopted to accelerate terrain rendering; to meet the fidelity requirement, texture mapping and light mapping are applied. With this approach, realistic, fractal-based rendering of 3D terrain scenes can be achieved on an ordinary PC, satisfying both the real-time and fidelity requirements.
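The paper does not spell out its fractal generator; a common choice for fractal terrain is midpoint displacement. A minimal diamond-square sketch, assuming a square heightmap of side 2^n + 1:

```python
import random

def diamond_square(n, roughness=0.55, seed=1):
    """Generate a (2**n + 1)-square fractal heightmap via the
    diamond-square (midpoint displacement) algorithm."""
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    for y in (0, size - 1):                      # seed the four corners
        for x in (0, size - 1):
            h[y][x] = rng.uniform(-1.0, 1.0)
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: each square's centre = corner average + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: each edge midpoint = average of its in-grid neighbours.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness    # finer grid, smaller noise
    return h

if __name__ == "__main__":
    terrain = diamond_square(5)                  # 33 x 33 heightmap
    print(len(terrain), min(map(min, terrain)), max(map(max, terrain)))
```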

3.
This paper presents an interactive system for quickly designing and previewing colored snapshots of indoor scenes. Different from high-quality 3D indoor scene rendering, which often takes several minutes to render a moderately complicated scene under a specific color theme with high-performance computing devices, our system aims at improving the effectiveness of color theme design of indoor scenes and employs an image colorization approach to efficiently obtain high-resolution snapshots with editable colors. Given several pre-rendered, multi-layer, gray images of the same indoor scene snapshot, our system is designed to colorize and merge them into a single colored snapshot. Our system also assists users in assigning colors to certain objects/components and infers more harmonious colors for the unassigned objects based on pre-collected priors to guide the colorization. The quickly generated snapshots of indoor scenes provide previews of interior design schemes with different color themes, making it easy to determine the personalized design of indoor scenes. To demonstrate the usability and effectiveness of this system, we present a series of experimental results on indoor scenes of different types, and compare our method with a state-of-the-art method for indoor scene material and color suggestion and offline/online rendering software packages.
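As a hedged sketch of the layer-colorization idea (not the authors' pipeline, which also infers harmonious colors from priors), the snippet below tints pre-rendered grayscale layers with assigned colors and composites them back-to-front with alpha; the (gray, alpha, rgb) layer format is an assumption:

```python
import numpy as np

def colorize_layer(gray, rgb):
    """Tint a grayscale layer (H x W, values in [0, 1]) with an RGB color;
    the gray value modulates the color, preserving the rendered shading."""
    return gray[..., None] * np.asarray(rgb, dtype=np.float32)

def composite(layers):
    """Back-to-front 'over' compositing of (gray, alpha, rgb) layers
    into a single colored snapshot."""
    h, w = layers[0][0].shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for gray, alpha, rgb in layers:            # ordered back to front
        a = alpha[..., None]
        out = colorize_layer(gray, rgb) * a + out * (1.0 - a)
    return out

if __name__ == "__main__":
    wall  = (np.full((4, 4), 0.8), np.ones((4, 4)), (0.9, 0.85, 0.7))
    chair = (np.full((4, 4), 0.5), np.eye(4),       (0.2, 0.4, 0.8))
    print(composite([wall, chair]).shape)      # (4, 4, 3)
```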

4.
An Image-Based Indoor Virtual Walkthrough System (cited by 6: 0 self-citations, 6 external)
Image-based rendering (IBR) is a fundamentally different way of producing graphics that, compared with traditional geometry-based rendering, is efficient and practical, and it has attracted growing attention from researchers in recent years. IBR still faces some major difficulties, however, such as seamless image stitching and real-time walkthroughs. To address these problems, we developed an indoor virtual walkthrough system based on a partial spherical model. The system combines automatic matching with human-computer interaction to stitch multiple photographs seamlessly into a single panorama, and it uses an improved lookup-table-based algorithm to achieve real-time walkthroughs from a fixed viewpoint.
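A minimal sketch of the lookup-table idea for fixed-viewpoint panorama viewing: for each screen pixel, the panorama coordinate hit by its viewing ray is tabulated once, so rendering a frame reduces to a single gather. An equirectangular panorama is assumed here, whereas the paper uses a partial spherical model:

```python
import numpy as np

def build_lut(width, height, fov=np.radians(90), yaw=0.0, pano_w=2048, pano_h=1024):
    """Precompute (row, col) panorama coordinates for each screen pixel."""
    f = (width / 2) / np.tan(fov / 2)                  # pinhole focal length
    xs = np.arange(width) - width / 2
    ys = np.arange(height) - height / 2
    x, y = np.meshgrid(xs, ys)
    # Viewing ray per pixel, rotated by the yaw angle (fixed viewpoint).
    lon = np.arctan2(x, f) + yaw                       # longitude
    lat = np.arctan2(-y, np.hypot(x, f))               # latitude
    col = ((lon / (2 * np.pi) + 0.5) % 1.0) * (pano_w - 1)
    row = (0.5 - lat / np.pi) * (pano_h - 1)
    return row.astype(np.int32), col.astype(np.int32)

def render(panorama, lut):
    """Per-frame view synthesis is a single gather through the table."""
    row, col = lut
    return panorama[row, col]

if __name__ == "__main__":
    pano = np.random.rand(1024, 2048, 3).astype(np.float32)
    lut = build_lut(640, 480, yaw=np.radians(30))
    print(render(pano, lut).shape)                     # (480, 640, 3)
```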

5.
Recent advances in sensing and software technologies enable us to obtain large-scale, yet fine 3D mesh models of cultural assets. However, such large models cannot be displayed interactively on consumer computers because of the performance limitations of the hardware. Cloud computing technology is a solution that can process a very large amount of information without adding to each client user's processing cost. In this paper, we propose an interactive rendering system, based on the cloud computing concept, for large 3D mesh models that are stored in a remote environment and accessed over a network by relatively small-capacity client machines. Our system uses both model- and image-based rendering methods for efficient load balancing between a server and clients. On the server, the 3D models are rendered by the model-based method using a hierarchical data structure with Level of Detail (LOD). On the client, an arbitrary view is constructed by using a novel image-based method, referred to as the Grid-Lumigraph, which blends colors from sampling images received from the server. The resulting rendering system can efficiently render any image in real time. We implemented the system and evaluated the rendering and data transferring performance.
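The Grid-Lumigraph itself is not described here in enough detail to reproduce; as a loose, hedged stand-in for its client-side blending step, this sketch blends the sample images whose cameras are nearest to the requested viewpoint, using inverse-distance weights:

```python
import numpy as np

def blend_views(query_pos, sample_positions, sample_images, k=3, eps=1e-6):
    """Blend the k sample images whose camera positions are closest to the
    requested viewpoint, weighting by inverse distance (weights sum to 1)."""
    pos = np.asarray(sample_positions, dtype=np.float64)
    d = np.linalg.norm(pos - np.asarray(query_pos, dtype=np.float64), axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)
    w /= w.sum()
    out = np.zeros_like(sample_images[nearest[0]], dtype=np.float64)
    for wi, idx in zip(w, nearest):
        out += wi * sample_images[idx]
    return out

if __name__ == "__main__":
    positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
    images = np.random.rand(4, 8, 8, 3)       # one sample image per camera
    novel = blend_views((0.3, 0.2, 0.0), positions, images)
    print(novel.shape)                         # (8, 8, 3)
```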

6.
We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multivideo footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, for example in the outdoors, without elaborate recording setup procedures, allowing also for hand-held recordings. Instead of scene depth estimation, layer segmentation or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion or freeze-and-rotate effects can all be created in the same way. Acquisition simplification, integration of moving cameras, generalization to difficult scenes and space-time symmetric interpolation amount to a widely applicable virtual video camera system.

7.
Special relativistic visualization offers the possibility of experiencing the optical effects of traveling near the speed of light, including apparent geometric distortions as well as Doppler and searchlight effects. Early high-quality computer graphics images of relativistic scenes were created using offline, computationally expensive CPU-side 4D ray tracing. Alternate approaches such as image-based rendering and polygon-distortion methods are able to achieve interactivity, but exhibit inferior visual quality due to sampling artifacts. In this paper, we introduce a hybrid rendering technique based on polygon distortion and local ray tracing that facilitates interactive high-quality visualization of multiple objects moving at relativistic speeds in arbitrary directions. The method starts by calculating tight image-space footprints for the apparent triangles of the 3D scene objects. The final image is generated using a single image-space ray tracing step incorporating Doppler and searchlight effects. Our implementation uses GPU shader programming and hardware texture filtering to achieve high rendering speed.
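The relativistic quantities involved are standard; a minimal numeric sketch (not the paper's GPU pipeline) of aberration, the Doppler factor, and the searchlight brightness gain for an observer moving at speed β (in units of c):

```python
import math

def relativistic_view(theta, beta):
    """Aberration and Doppler factor for an observer moving with speed
    `beta` (fraction of c) toward +x. `theta` is the scene-frame angle
    between the motion direction and the direction to the light source."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    cos_t = math.cos(theta)
    # Aberration: apparent directions shift toward the direction of motion.
    cos_obs = (cos_t + beta) / (1.0 + beta * cos_t)
    theta_obs = math.acos(max(-1.0, min(1.0, cos_obs)))
    # Doppler factor: observed frequency = D * emitted frequency.
    doppler = gamma * (1.0 + beta * cos_t)
    # Searchlight effect: bolometric radiance scales with D**4.
    brightness_gain = doppler ** 4
    return theta_obs, doppler, brightness_gain

if __name__ == "__main__":
    for deg in (0, 45, 90, 135, 180):
        t_obs, d, gain = relativistic_view(math.radians(deg), beta=0.9)
        print(f"theta={deg:3d}  apparent={math.degrees(t_obs):6.1f}  "
              f"D={d:5.2f}  gain={gain:9.2f}")
```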

8.
A method for capturing geometric features of real-world scenes relies on a simple capture setup modification. The system might conceivably be packaged into a portable self-contained device. The multiflash imaging method bypasses 3D geometry acquisition and directly acquires depth edges from images. In place of expensive, elaborate equipment for geometry acquisition, we use a camera with multiple strategically positioned flashes. Instead of having to estimate the full 3D coordinates of points in the scene (using, for example, 3D cameras) and then look for depth discontinuities, our technique reduces the general 3D problem of depth edge recovery to one of 2D intensity edge detection. Our method could, in fact, help improve current 3D cameras, which tend to produce incorrect results near depth discontinuities. Exploiting the imaging geometry for rendering provides a simple and inexpensive solution for creating stylized images from real scenes. We believe that our camera will be a useful tool for professional artists and photographers, and we expect that it will also let the average user easily create stylized imagery. This article is available with a short video documentary on CD-ROM.
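A hedged, simplified sketch of the multiflash principle: a pixel in cast shadow is dark in one flash image but lit in the max-composite of all flash images, and the flash position dictates on which side of a depth edge its shadow falls. The epipolar traversal and confidence tests of the actual method are omitted:

```python
import numpy as np

def depth_edges(flash_images, shadow_dirs, thresh=0.7):
    """Mark depth-edge pixels from images lit by flashes placed around the
    lens. shadow_dirs[i] is the (dy, dx) pixel direction in which flash i
    casts shadows (opposite the flash's offset from the camera axis)."""
    stack = np.stack(flash_images).astype(np.float32)
    max_img = stack.max(axis=0) + 1e-6          # shadow-free composite
    edges = np.zeros(max_img.shape, dtype=bool)
    for img, (dy, dx) in zip(stack, shadow_dirs):
        ratio = img / max_img                   # ~1 where lit, << 1 in shadow
        shadow = ratio < thresh
        # A depth edge is a lit pixel whose next pixel along the shadow
        # direction is shadowed. (np.roll wraps at borders; a real
        # implementation would guard them.)
        next_is_shadow = np.roll(shadow, shift=(-dy, -dx), axis=(0, 1))
        edges |= ~shadow & next_is_shadow
    return edges

if __name__ == "__main__":
    left_flash = np.ones((6, 6)); top_flash = np.ones((6, 6))
    left_flash[:, 3] = 0.2          # shadow column right of an occluder edge
    print(depth_edges([left_flash, top_flash], [(0, 1), (1, 0)]).astype(int))
```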

9.
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.
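Subframe offset estimation lends itself to a worked sketch; the generic approach below (not necessarily the authors' algorithm) cross-correlates per-frame intensity traces of the two videos and refines the integer peak with parabolic interpolation:

```python
import numpy as np

def subframe_offset(trace_a, trace_b):
    """Return the (possibly fractional) lag l maximizing the correlation
    sum_n a[n + l] * b[n], i.e. how many frames trace_b runs ahead of trace_a."""
    a = (trace_a - np.mean(trace_a)) / (np.std(trace_a) + 1e-9)
    b = (trace_b - np.mean(trace_b)) / (np.std(trace_b) + 1e-9)
    corr = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a))
    k = int(np.argmax(corr))                    # best integer lag
    if 0 < k < len(corr) - 1:                   # parabolic subframe refinement
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        return lags[k] + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return float(lags[k])

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 400)
    s = np.sin(t) + 0.3 * np.sin(2.7 * t)       # shared scene-brightness signal
    a, b = s[7:387], s[10:390]                  # b runs ~3 frames ahead of a
    print(round(subframe_offset(a, b), 2))      # approximately 3.0
```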

10.
High-Quality Hardware Volume Rendering with 3D Textures (cited by 1: 0 self-citations, 1 external)
Compared with ray casting, traditional 3D texture volume rendering algorithms usually have difficulty producing high-quality images. To enhance the realism and quality of the rendered images, volumetric shadow effects are implemented at interactive rates in GPU (Graphics Processing Unit)-based 3D texture volume rendering, and, taking visual perception in realistic image synthesis into account, GPU-based high dynamic range tone mapping is applied to the rendered result images. Finally, several volume datasets are rendered; experiments show that these techniques largely overcome the shortcomings of traditional texture-based rendering and improve image quality.
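The abstract does not name the tone-mapping operator; the global Reinhard operator is a common choice for GPU HDR tone mapping and serves as a minimal sketch:

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Map an HDR image (H x W x 3, linear radiance) to [0, 1] display range
    using the global Reinhard operator L/(1+L) on luminance."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))      # scene "key" luminance
    scaled = key * lum / (log_avg + eps)              # expose to mid-grey
    mapped = scaled / (1.0 + scaled)                  # compress highlights
    ratio = mapped / (lum + eps)
    ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
    return ldr ** (1.0 / 2.2)                         # gamma for display

if __name__ == "__main__":
    hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(4, 4, 3))
    print(reinhard_tonemap(hdr).max() <= 1.0)         # True
```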

11.
We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

12.
While many methods exist for simulating diffuse light inter-reflections, relatively few of them are adapted to dynamic scenes. Despite approximations made to the formal rendering equation, managing dynamic environments at interactive or real-time frame rates still remains one of the most challenging problems. This paper presents a lighting simulation system based on photon streaming, performed continuously on the central processing unit (CPU). The power corresponding to each photon impact is accumulated onto predefined points, called virtual light accumulators (VLA). VLAs are used during the rendering phase as virtual light sources. We also introduce a priority management system that automatically adapts to abrupt changes during the lighting simulation (for instance, due to visibility changes or fast object motion). Our system naturally benefits from multi-core architectures. The rendering process is performed in real time using a graphics processing unit (GPU), independently from the lighting simulation process. As shown in the results, our method provides high frame rates for dynamic scenes with moving viewpoints, objects and light sources.
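A hedged sketch of the accumulation step, with a hypothetical data layout: each photon impact deposits its power onto the nearest VLA, which later acts as a virtual light source during rendering:

```python
import numpy as np

def accumulate_photons(vla_positions, photon_positions, photon_powers):
    """Deposit each photon impact's power onto its nearest virtual light
    accumulator (VLA); the VLAs then act as virtual light sources."""
    vla = np.asarray(vla_positions, dtype=np.float64)     # (V, 3)
    pts = np.asarray(photon_positions, dtype=np.float64)  # (P, 3)
    power = np.asarray(photon_powers, dtype=np.float64)   # (P, 3) RGB
    # Nearest VLA per photon (brute force; a k-d tree would scale better).
    d2 = ((pts[:, None, :] - vla[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                           # (P,)
    acc = np.zeros((len(vla), 3))
    np.add.at(acc, nearest, power)                        # unbuffered scatter-add
    return acc

if __name__ == "__main__":
    vlas = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
    photons = np.random.rand(1000, 3)
    powers = np.full((1000, 3), 1.0 / 1000)               # unit flux, split evenly
    print(accumulate_photons(vlas, photons, powers).sum(axis=0))  # ~[1. 1. 1.]
```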

13.
Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems incorporate various distortions, particularly geometric distortions induced by object dis-occlusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images as they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Due to the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. Results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.

14.
BH_GRAPH is a foundational software platform for real-time 3D graphics development and execution, aimed at developers of visual simulation applications. It provides an extensible software architecture, standardized scene management mechanisms, efficient scene processing methods, and an easy-to-use application programming interface, offering complete technical support for the rapid development and efficient execution of 3D graphics applications. BH_GRAPH mainly consists of a 3D scene rendering engine, a 3D object modeling tool, a 3D scene layout tool, and a set of key technologies. This paper outlines the software structure, basic functions, and technical features of each major component of BH_GRAPH.

15.
In this paper, we focus on efficient compression and streaming of frames rendered from a dynamic 3D model. Remote rendering and on-the-fly streaming are becoming increasingly attractive for interactive applications: data is kept confidential and only images are sent to the client. Even if the client's hardware resources are modest, the user can interact with state-of-the-art rendering applications executed on the server. Our solution focuses on video information augmented, e.g., by depth, which is key to increasing robustness with respect to data loss and image reconstruction, and is an important feature for stereo vision and other client-side applications. Two major challenges arise in such a setup: first, the server workload has to be controlled to support many clients; second, the data transfer needs to be efficient. Consequently, our contributions are twofold. First, we reduce the server-based computations by making use of sparse sampling and temporal consistency to avoid expensive pixel evaluations. Second, our data-transfer solution takes limited bandwidths into account, is robust to information loss, and its compression and decompression are efficient enough to support real-time interaction. Our key insight is to tailor our method explicitly for rendered 3D content and to shift some computations onto client GPUs, to better balance the server/client workload. Our framework is progressive, scalable, and allows us to stream augmented high-resolution (e.g., HD-ready) frames with small bandwidth on standard hardware.

16.
Existing algorithms for rendering subsurface scattering in real time cannot deal well with scattering over longer distances. Kernels for image-space algorithms become very large in these circumstances and separation no longer works, while geometry-based algorithms cannot preserve details very well. We present a novel approach that deals with all these downsides. While for lower scattering distances the advantages of geometry-based methods are small, this is no longer the case for high scattering distances (as we will show). Our proposed method takes advantage of the highly detailed results of image-space algorithms and combines them with a geometry-based method to add the essential scattering from sources not included in image space. Our algorithm does not require pre-computation based on the scene's geometry; it can be applied to static and animated objects directly. Our method is able to provide results that come close to ray-traced images, which we show in direct comparisons with images generated by PBRT. We compare our results to state-of-the-art techniques that are applicable in these scenarios and show that we provide superior image quality while maintaining interactive rendering times.

17.
An Interactive Virtual Endoscopy System (cited by 7: 0 self-citations, 7 external)
Applying computer graphics, image processing, and virtual reality techniques to medical endoscopy has produced virtual endoscopy. To bring virtual reality technology to medical image processing, so that physicians can perform virtual surgery and non-invasive diagnosis, this paper proposes a complete framework for an interactive virtual endoscopy system built on a comprehensive combination of computer graphics and image processing techniques, and analyzes and discusses the system architecture and its various models. To meet the system's requirements for real-time performance and rendering fidelity, it further proposes methods based on an Object Cache and extended region growing, and applies them to medical image processing with good results. The system reconciles the real-time and rendering-accuracy demands of virtual reality and visualization, providing a powerful tool for medical image visualization.
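Region growing is a standard segmentation step in virtual endoscopy pipelines; a minimal 6-connected intensity-window sketch (the paper's extended variant adds further criteria not described here):

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, low, high):
    """Segment the connected region around `seed` whose voxel intensities
    lie in [low, high]. Plain 6-connected region growing."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and low <= volume[n] <= high:
                mask[n] = True
                queue.append(n)
    return mask

if __name__ == "__main__":
    vol = np.zeros((16, 16, 16))
    vol[4:12, 4:12, 4:12] = 100.0          # a bright cube inside dark tissue
    seg = region_grow(vol, seed=(8, 8, 8), low=50.0, high=150.0)
    print(seg.sum())                        # 512 voxels
```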

18.
In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space–time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.

19.
The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
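A minimal reading of "convex combinations of exemplar contours", assuming the exemplars are already resampled to the same number of corresponding points:

```python
import numpy as np

def combine_contours(contours, weights):
    """Synthesize a new contour as a convex combination of exemplar
    contours. Each contour is an (N, 2) array with corresponding points;
    weights must be non-negative and sum to 1."""
    w = np.asarray(weights, dtype=np.float64)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9, "not a convex combination"
    stack = np.stack(contours)                   # (K, N, 2)
    return np.tensordot(w, stack, axes=1)        # weighted sum -> (N, 2)

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    round_crown = np.c_[np.cos(theta), np.sin(theta)]             # circular
    tall_crown = np.c_[0.6 * np.cos(theta), 1.4 * np.sin(theta)]  # elongated
    new = combine_contours([round_crown, tall_crown], [0.3, 0.7])
    print(new.shape)   # (64, 2): an intermediate crown silhouette
```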

20.
Although new graphics hardware has accelerated the rendering process, the realistic simulation of scenes including participating media remains a difficult problem. Interactive results have been achieved for isotropic media as well as for single scattering. In this paper, we present an interactive global illumination algorithm for the simulation of scenes that include participating media, even anisotropic and/or inhomogeneous media. The position of the observer is important in order to render inhomogeneous media according to the transport equation. Previous work normally needed to be ray-based in order to compute this equation properly. Our approach is capable of achieving real time using two 3D textures on a simple desktop PC. For anisotropic participating media we combine density estimation techniques and graphics hardware capabilities.
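Evaluating the transport equation along view rays can be sketched with emission-absorption ray marching over a density grid (single scattering, anisotropic phase functions, and the paper's two-3D-texture scheme are omitted):

```python
import numpy as np

def ray_march(density, origin, direction, step=0.5, sigma_t=0.3, emission=1.0):
    """Integrate the emission-absorption transport equation along one ray
    through a voxel density grid, compositing front to back: at each sample,
    radiance += T * (1 - exp(-sigma_t * rho * dt)) and T *= exp(-sigma_t * rho * dt)."""
    pos = np.asarray(origin, dtype=np.float64)
    d = np.asarray(direction, dtype=np.float64)
    d /= np.linalg.norm(d)
    radiance, transmittance = 0.0, 1.0
    while transmittance > 1e-3:
        idx = tuple(pos.astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, density.shape)):
            break                               # the ray left the volume
        rho = density[idx]
        attenuation = np.exp(-sigma_t * rho * step)
        # Light emitted at this sample, attenuated by the medium in front of it.
        radiance += transmittance * (1.0 - attenuation) * emission
        transmittance *= attenuation
        pos += d * step
    return radiance, transmittance

if __name__ == "__main__":
    fog = np.random.rand(32, 32, 32)            # inhomogeneous density field
    L, T = ray_march(fog, origin=(0, 16, 16), direction=(1, 0, 0))
    print(round(L, 3), round(T, 3))
```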
