Similar Literature
20 similar documents were retrieved.
1.
Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state-of-the-art report is focused on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
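The report's central example of combining graphics knowledge with learning is the integration of differentiable rendering into network training. The Python below is a purely illustrative toy sketch (my own construction, not code from the survey): a trivially differentiable "renderer" maps a scene parameter (albedo) to pixel values, and gradient descent on an image loss recovers the parameter by differentiating through the renderer; real systems substitute a differentiable rasterizer or ray marcher and a neural network.

    # Toy illustration of differentiable rendering inside an optimization loop. The
    # "renderer" below (pixel = albedo * light) is a hypothetical stand-in; real neural
    # rendering systems use a differentiable rasterizer or ray marcher plus a network.

    def render(albedo, lights):
        # Lambertian-style shading per "pixel"; differentiable with respect to albedo.
        return [albedo * l for l in lights]

    def loss_and_grad(albedo, lights, target):
        pred = render(albedo, lights)
        residuals = [p - t for p, t in zip(pred, target)]
        loss = sum(r * r for r in residuals) / len(residuals)
        # Analytic gradient of the mean squared error, back through the renderer.
        grad = sum(2.0 * r * l for r, l in zip(residuals, lights)) / len(residuals)
        return loss, grad

    lights = [0.2, 0.5, 1.0, 0.8]          # known per-pixel lighting
    target = render(0.7, lights)           # "photograph" produced with unknown albedo 0.7

    albedo, lr = 0.1, 0.5                  # initial guess and step size
    for _ in range(200):
        loss, grad = loss_and_grad(albedo, lights, target)
        albedo -= lr * grad                # gradient step through the renderer
    print(f"recovered albedo = {albedo:.3f}, loss = {loss:.2e}")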

2.
PREMO is an emerging international standard for the presentation of multimedia objects, including computer graphics. Open Inventor™ is a commercially available "de facto" standard for interactive computer graphics, packaged as a library of objects. In this paper, we consider whether the concepts and objects of PREMO are sufficient to represent a professional-quality system such as Open Inventor.
By comparing PREMO with Open Inventor, we hope to show that PREMO's computer graphics environment model and event model can properly describe Open Inventor's rendering action and event model. The scene graph is central to Open Inventor: most Open Inventor functions rely on various operations over scene graphs. The construction, editing and traversal of scene graphs are implemented as a set of newly defined PREMO objects. Graphics rendering, event handling and scene graphs constitute the fundamental parts of Open Inventor; the other Open Inventor functionalities can be constructed from these. We conclude that, since these three fundamental parts of Open Inventor can be properly modelled and implemented by means of PREMO, the concepts and objects of PREMO are sufficient to represent Open Inventor.
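As a rough illustration of the scene-graph construction and traversal that the comparison hinges on, the Python sketch below uses hypothetical node types (not the PREMO object definitions and not the Open Inventor API): group and shape nodes are assembled into a graph, and a rendering "action" is applied by depth-first traversal.

    # Minimal scene-graph sketch with made-up types: group nodes hold children, leaf
    # nodes hold drawable data, and a traversal applies a rendering "action" depth-first.

    class Node:
        def apply(self, action, depth=0):
            raise NotImplementedError

    class Group(Node):
        def __init__(self, name, children=None):
            self.name, self.children = name, list(children or [])
        def add_child(self, node):            # construction / editing
            self.children.append(node)
        def apply(self, action, depth=0):     # traversal
            action(self, depth)
            for child in self.children:
                child.apply(action, depth + 1)

    class Shape(Node):
        def __init__(self, name):
            self.name = name
        def apply(self, action, depth=0):
            action(self, depth)

    def render_action(node, depth):
        # Stand-in for a real rendering action (e.g., issuing draw calls).
        print("  " * depth + f"render {node.__class__.__name__}({node.name})")

    root = Group("scene", [Group("robot", [Shape("body"), Shape("head")]), Shape("floor")])
    root.apply(render_action)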

3.
Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques, including methods for advanced image-based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

4.
A Highly Parallel System Architecture for Multi-task Parallel Rendering
As computer graphics technology moves into practical use, ever more realistic and detailed complex 3D scenes must be constructed, and their data volumes keep growing. Together with increasing demands for real-time interaction and for multi-screen, high-resolution display, this creates an urgent need for a multi-task parallel graphics rendering system targeted at large-scale complex scenes. This paper presents the architecture of a highly parallel, multi-task, multi-screen parallel rendering system for large-scale complex scenes, supporting parallelized processing of graphics tasks and multi-screen display. The architecture separates geometry computation from rendering and parallelizes each independently: on the computation nodes, tasks are classified by the type of object to be rendered so that they can be computed and distributed in parallel, while on the rendering nodes the images of the individual screen tiles are composited in parallel. Experimental results show that the architecture achieves good parallel efficiency and scalability for multiple tasks, makes full use of the system's parallel computing resources, and delivers good rendering performance.
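The following Python sketch is an assumption-laden illustration of the described split, not the paper's implementation: geometry work is grouped by object type as on the computation nodes, each screen tile is rendered by a separate worker as on the rendering nodes, and the tiles are composited into one frame.

    # Hypothetical sketch of the compute/render split described above (not the paper's
    # code): geometry work is grouped by object type, each screen tile is rendered by a
    # separate worker, and the tiles are composited side by side into one frame.
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    objects = [("terrain", 0), ("building", 1), ("building", 2), ("vehicle", 3)]

    # "Computation nodes": classify geometry work by object type for parallel dispatch.
    by_type = defaultdict(list)
    for obj_type, obj_id in objects:
        by_type[obj_type].append(obj_id)

    def render_tile(tile_id):
        # "Rendering node": produce a small grayscale tile; brightness encodes workload.
        shade = min(255, 40 * sum(len(ids) for ids in by_type.values()))
        return tile_id, [[shade] * 4 for _ in range(4)]       # a 4x4 pixel tile

    with ThreadPoolExecutor() as pool:                        # one worker per screen tile
        tiles = dict(pool.map(render_tile, range(4)))

    # "Compositing": place the four tiles side by side into the final frame buffer.
    frame = [sum((tiles[t][row] for t in range(4)), []) for row in range(4)]
    print(len(frame), "x", len(frame[0]), "pixels composited from", len(tiles), "tiles")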

5.
BH_GRAPH is a foundational software platform for developers of visual simulation applications that supports the development and execution of real-time 3D graphics. It provides an extensible software architecture, a standardized scene management mechanism, efficient scene processing methods, and an easy-to-use application programming interface, offering complete technical support for the rapid development and efficient execution of 3D graphics applications. BH_GRAPH consists mainly of a 3D scene rendering engine, a 3D object modeling tool, a 3D scene layout tool, and a series of key techniques. This paper briefly introduces the software structure, basic functions, and technical features of the main components of BH_GRAPH.

6.
A Survey of Interactive Rendering Techniques for Large-Scale Complex Scenes
Fast rendering of large-scale complex scenes is an underlying supporting technology for many important applications such as virtual reality, real-time simulation, and interactive 3D design, and it is also a fundamental problem faced by many research areas. With the rapid development of 3D scanning and modeling technology in recent years, the scale and complexity of 3D scenes keep increasing; interactive rendering of large-scale complex scenes has therefore received growing attention from researchers in China and abroad and has produced a series of research results. This survey first briefly reviews the research progress in interactive rendering of large-scale complex scenes. It then summarizes and analyzes the main key techniques involved, compares and classifies representative rendering systems from China and abroad, describes the main research topics, and gives the basic components and a general framework that an interactive rendering system for large-scale complex scenes should contain. Finally, it discusses directions for future work.

7.
We present a novel framework for real-time multi-perspective rendering. While most existing approaches are based on ray tracing, we present an alternative approach that emulates multi-perspective rasterization on the classical perspective graphics pipeline. To render a general multi-perspective camera, we first decompose the camera into piecewise linear primitive cameras called general linear cameras, or GLCs. We derive the closed-form projection equations for GLCs and show how to rasterize triangles onto GLCs via a two-pass rendering algorithm. In the first pass, we compute the GLC projection coefficients of each scene triangle using a vertex shader. The linear rasterizer on the graphics hardware then interpolates these coefficients at each pixel. Finally, we use these interpolated coefficients to compute the projected pixel coordinates using a fragment shader. In the second pass, we move the pixels to their actual projected positions. To avoid holes, we treat neighboring pixels as triangles and re-render them onto the GLC image plane. We demonstrate our real-time multi-perspective rendering framework in a wide range of applications, including synthesizing panoramic and omnidirectional views, rendering reflections on curved mirrors, and creating multi-perspective faux animations. Compared with GPU-based ray tracing methods, our rasterization approach scales better with scene complexity and can render scenes with a large number of triangles at interactive frame rates.
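The two-pass structure can be emulated on the CPU. The Python sketch below is only schematic: it substitutes a made-up per-column shear for the paper's closed-form GLC projection, records the interpolated target position per pixel in pass one, and scatters pixels to those positions in pass two; the hole filling by re-rendering neighboring pixels is omitted.

    # Schematic CPU emulation of the two-pass multi-perspective idea (the projection here
    # is a made-up per-column shear, not the paper's GLC math).
    W, H = 24, 12
    pass1 = {}                              # pixel -> multi-perspective target position

    def fake_multi_perspective(x, y):
        # Stand-in for the interpolated GLC projection coefficients: every image column
        # gets a slightly different viewpoint, modeled as a y-shear growing with x.
        return x, y + x // 6

    # Pass 1: "rasterize" a filled rectangle (standing in for scene triangles) and record,
    # for every covered pixel, where the multi-perspective camera would project it.
    for y in range(3, 9):
        for x in range(2, 14):
            pass1[(x, y)] = fake_multi_perspective(x, y)

    # Pass 2: scatter each pixel to its projected position in the final image.
    image = [[" "] * W for _ in range(H)]
    for (x, y), (px, py) in pass1.items():
        if 0 <= px < W and 0 <= py < H:
            image[py][px] = "#"

    print("\n".join("".join(row) for row in image))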

8.
Real-time three-dimensional (3D) graphics is emerging rapidly in multimedia applications, but it suffers from requirements for huge computation, high bandwidth, and large buffers. In order to achieve hardware efficiency for 3D graphics rendering, we propose a novel approach named index rendering. The basic concept of index rendering is to realize a 3D rendering pipeline by using asynchronous multi-dataflows. Triangle information can be divided into several parts, with each part capable of being transferred independently and asynchronously. Finally, all data are converged by the index to generate the final image. The index rendering approach can eliminate unnecessary operations in the traditional 3D graphics pipeline; these unnecessary operations are caused by the invisible pixels and triangles in the 3D scene. Previous work, deferred shading, eliminates the operations relating to invisible pixels, but it requires large tradeoffs in bandwidth and buffer size. With index rendering, we can eliminate operations on both invisible pixels and invisible triangles with fewer tradeoffs compared with the deferred shading approach. The simulation and analysis results show that the index rendering approach can reduce lighting operations by 10%-70% with flat and Gouraud shading and by 30%-95% with Phong shading. Furthermore, it saves 70% of buffer size and 50%-70% of bandwidth compared with the deferred shading approach. The results also indicate that index rendering is especially suitable for low-cost portable rendering devices. Hence, index rendering is a hardware-efficient architecture for 3D graphics, and it makes rendering hardware more easily integrated into multimedia systems, especially system-on-a-chip (SOC) designs.
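The saving described above, avoiding work on invisible pixels and triangles, can be illustrated with a small visibility-first pass in Python. This is an illustrative sketch, not the proposed hardware architecture: a per-pixel triangle index resolves visibility first, so only surviving triangles and pixels would reach the lighting stage.

    # Illustrative visibility-first sketch (not the proposed hardware architecture): a
    # per-pixel triangle index resolves visibility before any shading, so pixels and whole
    # triangles that end up invisible never reach the lighting stage.
    import math

    W, H = 8, 4
    depth = [[math.inf] * W for _ in range(H)]
    index = [[None] * W for _ in range(H)]        # which triangle owns each pixel

    # Toy "triangles": horizontal spans with a constant depth, given as (id, x0, x1, z).
    triangles = [(0, 0, 8, 5.0), (1, 2, 6, 2.0), (2, 3, 5, 9.0)]   # id 2 is fully hidden

    for tid, x0, x1, z in triangles:              # visibility pass: no shading done here
        for y in range(H):
            for x in range(x0, x1):
                if z < depth[y][x]:
                    depth[y][x], index[y][x] = z, tid

    visible = {t for row in index for t in row if t is not None}
    shaded_pixels = sum(t is not None for row in index for t in row)
    print("triangles that reach shading:", sorted(visible))   # triangle 2 is skipped
    print("pixels shaded once each:", shaded_pixels)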

9.
俞益洲 (Yu Yizhou), 《计算机学报》 (Chinese Journal of Computers), 2000, 23(9): 898.
Image-based modeling and rendering techniques greatly advanced the level of photorealism in computer graphics. They were originally proposed to accelerate rendering with the ability to vary viewpoint only. My work in this area focused on capturing and modeling real scenes for novel visual interactions such as varying lighting condition and scene configuration in addition to viewpoint. This work can lead to applications such as virtual navigation of a real scene, interaction with the scene, novel scene c…

10.
In recent years, the scale and complexity of 3D virtual scenes have kept increasing. Limited by hardware, the extremely large scenes found in some applications (such as building complexes and cities) are difficult to render, or to interact with, on a single machine. To address this problem, a distributed rendering framework is proposed that partitions a large-scale scene by content into sub-scenes that a single node can render. These sub-scenes are distributed to different rendering nodes in a cluster for processing, and their rendering results are composited according to depth information to obtain the final rendering of the whole scene. To reduce the interaction response time, the rendering results of the sub-scenes are compressed before transmission. Experiments verify that the proposed distributed rendering system can efficiently handle the rendering of and interaction with extremely large scenes, and that it scales well, meeting the needs of interactive rendering of large-scale scenes in many fields.
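Depth-based compositing of sub-scene results can be sketched in a few lines of Python with NumPy. This is illustrative only; the node count, image sizes, and random data are assumptions: each node contributes a color image and a depth buffer, and the compositor keeps, per pixel, the color with the smallest depth.

    # Minimal depth-compositing sketch (illustrative only; sizes and data are assumed):
    # each node returns a color image plus a depth buffer for its sub-scene, and the
    # compositor keeps, per pixel, the color whose depth is smallest.
    import numpy as np

    H, W, nodes = 4, 6, 3
    rng = np.random.default_rng(0)

    colors = rng.integers(0, 256, size=(nodes, H, W, 3), dtype=np.uint8)  # per-node RGB
    depths = rng.uniform(1.0, 10.0, size=(nodes, H, W))                   # per-node depth
    depths[1, :, :3] = 0.5               # pretend node 1's sub-scene is nearest on the left

    nearest = np.argmin(depths, axis=0)  # index of the winning node for every pixel
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    final = colors[nearest, rows, cols]  # gather the winning colors, shape (H, W, 3)

    print("winning node per pixel:\n", nearest)
    print("composited image shape:", final.shape)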

11.
Rendering of large-scale forest scenes is a challenging task, whose high geometric complexity puts a heavy burden on current graphics hardware. When navigating the scene, the overall visual result is generally considered the core concern. A new method is proposed in this paper for large-scale forest rendering using clustering and merging strategies. Our method improves the rendering effect by clustering polygons according to point information in relation to their neighbours. A fast forest rendering system is developed accordingly. The related techniques in the system can improve the visual quality on demand for different applications.

12.
Progressive light transport simulations aspire to physically-based, consistent rendering that obtains visually appealing illumination effects, depth and realism. Thereby, the handling of large scenes is a difficult problem, as in typical scene subdivision approaches the parallel processing requires frequent synchronization due to the bouncing of light throughout the scene. In practice, however, only a few object parts noticeably contribute to the radiance observable in the image, whereas large areas play only a minor role. In fact, a mesh simplification of the latter can go unnoticed by the human eye. This particular importance to the visible radiance in the image calls for an output-sensitive mesh reduction that allows rendering originally out-of-core scenes on a single machine without memory swapping. Thus, in this paper, we present a preprocessing step that reduces the scene size under the constraint of radiance preservation, with a focus on high-frequency effects such as caustics. For this, we perform a small number of preliminary light transport simulation iterations. In this way, we identify mesh parts that contribute significantly to the visible radiance in the scene, which we thus preserve during mesh reduction.
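A minimal sketch of the contribution-driven reduction idea follows, under assumed weights and counts rather than the paper's actual estimator: a few preliminary "light transport" passes tally per-triangle contributions, and triangles below a small contribution threshold are marked for aggressive simplification.

    # Illustrative sketch of contribution-driven mesh reduction (assumed weights and
    # counts, not the paper's estimator): a few preliminary "light transport" passes
    # tally how often each triangle contributes to the image, and rarely hit triangles
    # are marked for aggressive simplification before the full progressive run.
    import random

    random.seed(1)
    num_triangles = 12
    hit_counts = [0] * num_triangles

    def trace_preview_path():
        # Stand-in for one light path: it touches three triangles, with low indices
        # (think caustic casters near the camera) hit far more often than the rest.
        weights = [20, 20, 10, 10, 5, 1, 1, 1, 1, 1, 1, 1]
        return random.choices(range(num_triangles), weights=weights, k=3)

    for _ in range(500):                      # small number of preliminary iterations
        for tri in trace_preview_path():
            hit_counts[tri] += 1

    threshold = 0.03 * sum(hit_counts)        # triangles below 3% of total contribution
    keep = [t for t in range(num_triangles) if hit_counts[t] >= threshold]
    drop = [t for t in range(num_triangles) if hit_counts[t] < threshold]
    print("preserve during reduction:", keep)
    print("simplify aggressively:   ", drop)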

13.
The purpose of this work is the semantic visualization of complex 3D city models containing numerous dynamic entities, as well as performing interactive semantic walkthroughs and flights without predefined paths. This is achieved by using a 3D multilayer scene graph that integrates geometric and semantic information, as well as by performing efficient geometric and what we call semantic view culling. The proposed semantic-geometric scene graph is a 3D structure composed of several layers which is suitable for visualizing geometric data with semantic meaning while the user is navigating inside the 3D city model. The BqR-Tree is the data structure specially developed for the geometric layer for the purpose of speeding up rendering time in urban scenes. It is an R-Tree variant based on a quadtree spatial partitioning, which outperforms the usual R-Trees when view culling is applied in urban scenes. The BqR-Tree is defined by considering the city block as the basic and logical unit. The advantage of the block, as opposed to the traditional unit, the building, is that it is easily identified regardless of the data source format, and it allows inclusion of mobile and semantic elements in a natural way. The usefulness of the 3D scene graph has been tested with low-structured data, which makes its application appropriate to almost all city data containing not only static but also dynamic elements.
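A plain quadtree view-culling sketch in Python conveys the geometric part of the idea; the actual BqR-Tree additionally stores R-Tree nodes per quadrant and uses city blocks as leaves, and the block names and boxes below are made up. Quadrants that do not overlap the view rectangle are pruned wholesale.

    # Quadtree view-culling sketch (illustrative only, not the BqR-Tree itself). Blocks
    # are 2D bounding boxes, and a query rectangle standing in for the view frustum
    # prunes whole quadrants before testing individual blocks.

    def overlaps(a, b):                       # boxes as (xmin, ymin, xmax, ymax)
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    class QuadNode:
        def __init__(self, bounds, blocks, capacity=2):
            self.bounds, self.blocks, self.children = bounds, blocks, []
            if len(blocks) > capacity:
                x0, y0, x1, y1 = bounds
                mx, my = (x0 + x1) / 2, (y0 + y1) / 2
                quads = [(x0, y0, mx, my), (mx, y0, x1, my),
                         (x0, my, mx, y1), (mx, my, x1, y1)]
                self.children = [QuadNode(q, [b for b in blocks if overlaps(b[1], q)],
                                          capacity) for q in quads]
                self.blocks = []

        def visible(self, frustum, out):
            if not overlaps(self.bounds, frustum):
                return out                    # whole quadrant culled
            for name, box in self.blocks:
                if overlaps(box, frustum):
                    out.append(name)
            for child in self.children:
                child.visible(frustum, out)
            return out

    blocks = [("block-A", (0, 0, 10, 10)), ("block-B", (40, 5, 55, 20)),
              ("block-C", (70, 70, 90, 95)), ("block-D", (10, 60, 30, 80))]
    tree = QuadNode((0, 0, 100, 100), blocks)
    print("blocks to render:", tree.visible((0, 0, 50, 50), []))

Note that a block spanning two quadrants may be reported more than once in this sketch; a real system would deduplicate before issuing draw calls.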

14.
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations and usage scenarios as well as scalability results.

15.
This paper presents an interactive system for quickly designing and previewing colored snapshots of indoor scenes. Different from high-quality 3D indoor scene rendering, which often takes several minutes to render a moderately complicated scene under a specific color theme with high-performance computing devices, our system aims at improving the effectiveness of color theme design of indoor scenes and employs an image colorization approach to efficiently obtain high-resolution snapshots with editable colors. Given several pre-rendered, multi-layer, gray images of the same indoor scene snapshot, our system is designed to colorize and merge them into a single colored snapshot. Our system also assists users in assigning colors to certain objects/components and infers more harmonious colors for the unassigned objects based on pre-collected priors to guide the colorization. The quickly generated snapshots of indoor scenes provide previews of interior design schemes with different color themes, making it easy to determine the personalized design of indoor scenes. To demonstrate the usability and effectiveness of this system, we present a series of experimental results on indoor scenes of different types, and compare our method with a state-of-the-art method for indoor scene material and color suggestion, as well as with offline/online rendering software packages.
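As an illustration of the layer-colorization step only (not the paper's system; the layers, prior, and colors below are invented), the Python sketch multiplies each pre-rendered gray layer by its assigned RGB color, derives a fallback color for unassigned objects from a crude stand-in for the harmony prior, and merges the layers front-to-back.

    # Minimal layer-colorization sketch: each pre-rendered gray layer holds the shading of
    # one object; assigning a color multiplies the shading by an RGB value, and the
    # colored layers are merged so nearer layers overwrite farther ones.

    layers = {                                  # object -> (gray shading row, depth order)
        "wall": ([0.9, 0.9, 0.8, 0.8], 2),
        "sofa": ([0.0, 0.7, 0.6, 0.0], 1),      # zeros mean "this pixel not covered"
    }
    assigned = {"wall": (0.95, 0.92, 0.85)}     # user picks a warm off-white wall
    # A crude stand-in for the harmony prior: unassigned objects get a muted complement
    # of the average assigned color.
    avg = [sum(c[i] for c in assigned.values()) / len(assigned) for i in range(3)]
    fallback = tuple(0.5 * (1.0 - a) + 0.3 for a in avg)

    width = 4
    image = [(0.0, 0.0, 0.0)] * width
    for name, (gray, _order) in sorted(layers.items(), key=lambda kv: -kv[1][1]):
        color = assigned.get(name, fallback)
        for x, g in enumerate(gray):
            if g > 0.0:                         # nearer layers overwrite farther ones
                image[x] = tuple(g * c for c in color)

    print("colored scanline:", [tuple(round(v, 2) for v in px) for px in image])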

16.
The digital home application market shifts just about every month. This means risk for developers struggling to adapt their applications to several platforms and marketplaces while changing how people experience and use their TVs, smartphones and tablets. New ubiquitous and context-aware experiences through interactive 3D applications on these devices engage users to interact with virtual applications with complex 3D scenes. Interactive 3D applications are boosted by emerging standards such as HTML5 and WebGL, which remove limitations and transform the Web into a real application framework to tackle interoperability over the heterogeneous digital home platforms. Developers can apply their knowledge of web-based solutions to design digital home applications, removing learning-curve barriers related to platform-specific APIs. However, constraints on rendering complex 3D environments are still present, especially in home media devices. This paper provides a state-of-the-art survey of current capabilities and limitations of digital home devices and describes a latency-driven system design based on a hybrid remote and local rendering architecture, enhancing the interactive experience of 3D graphics on these thin devices. It supports interactive navigation of highly complex 3D scenes while providing an interoperable solution that can be deployed over the wide digital home device landscape.

17.
In the medical area, interactive three-dimensional volume visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architecture. We attempt to overcome this limitation by rendering only visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and take advantage of combinations of early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume dataset of 512 x 304 x 1878 dimensions (the Visible Female) and achieved real-time performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists.
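The two classic accelerations named above can be shown on a single ray through a blocked volume. The Python below is a toy sketch (block size, transfer function, and data are assumptions, and the real implementation works on the GPU in 3D): empty blocks are skipped wholesale, and front-to-back compositing stops once accumulated opacity saturates.

    # Sketch of empty-space skipping and early ray termination on a 1D ray through a
    # blocked volume (illustrative only).

    BLOCK = 8
    volume = [0.0] * 64                        # scalar samples along one ray
    volume[24:40] = [0.9] * 16                 # an opaque region inside blocks 3 and 4

    block_max = [max(volume[b:b + BLOCK]) for b in range(0, len(volume), BLOCK)]

    color, alpha, samples = 0.0, 0.0, 0
    i = 0
    while i < len(volume):
        if block_max[i // BLOCK] <= 0.0:       # empty-space skipping: jump the whole block
            i = (i // BLOCK + 1) * BLOCK
            continue
        s = volume[i]
        a = s * 0.4                            # toy transfer function: opacity per sample
        color += (1.0 - alpha) * a * s         # front-to-back compositing
        alpha += (1.0 - alpha) * a
        samples += 1
        if alpha > 0.98:                       # early ray termination
            break
        i += 1

    print(f"composited value {color:.3f} using {samples} of {len(volume)} samples")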

18.
Ambient occlusion has proven to be a useful tool for producing realistic images, both in offline rendering and interactive applications. In production rendering, ambient occlusion is typically computed by casting a large number of short shadow rays from each visible point, yielding unparalleled quality but long rendering times. Interactive applications typically use screen-space approximations which are fast but suffer from systematic errors due to missing information behind the nearest depth layer. In this paper, we present two efficient methods for calculating ambient occlusion so that the results match those produced by a ray tracer. The first method is targeted for rasterization-based engines, and it leverages the GPU graphics pipeline for finding occlusion relations between scene triangles and the visible points. The second method is a drop-in replacement for ambient occlusion computation in offline renderers, allowing the querying of ambient occlusion for any point in the scene. Both methods are based on the principle of simultaneously computing the result of all shadow rays for a single receiver point.
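For reference, the quantity both methods target can be estimated by Monte Carlo ray casting at a single receiver point. The Python sketch below is a generic formulation (not either of the paper's two methods; the occluder geometry and ray budget are assumptions): short rays are cast over the hemisphere, and occlusion is the fraction blocked within the ray length.

    # Reference-style ambient occlusion at one receiver point: cast many short shadow
    # rays over the hemisphere and report the blocked fraction (generic sketch).
    import math, random

    random.seed(0)

    def hemisphere_dir(normal):
        # Uniform direction on the sphere, flipped into the hemisphere around the normal.
        while True:
            d = [random.uniform(-1, 1) for _ in range(3)]
            n2 = sum(x * x for x in d)
            if 0.0 < n2 <= 1.0:
                d = [x / math.sqrt(n2) for x in d]
                dot = sum(a * b for a, b in zip(d, normal))
                return d if dot > 0 else [-x for x in d]

    def hits_sphere(origin, direction, center, radius, max_t):
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return False
        t = (-b - math.sqrt(disc)) / 2.0
        return 0.0 < t < max_t                # only count hits within the short ray

    receiver, normal = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    occluder = ((0.6, 0.0, 0.6), 0.5)         # a sphere hovering near the receiver point
    rays, ray_length, blocked = 512, 2.0, 0
    for _ in range(rays):
        d = hemisphere_dir(normal)
        if hits_sphere(receiver, d, *occluder, ray_length):
            blocked += 1
    print(f"ambient occlusion = {blocked / rays:.2f}")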

19.
Construction and Realistic Rendering of Highly Complex Plant Scenes
Rendering highly realistic natural scenes is a challenging problem in computer graphics research. Plant objects such as grass and trees are important components of virtual natural scenes, but plants are numerous in species and varied in form, and their complex structure makes modeling, storage, and rendering all quite difficult. To address this problem, according to the characteristics of the plants, polygons, texels, and volumetric textures are used as the tools to construct different kinds of plants, and simple construction methods are given for the different plant scenes. The rendering results show that these construction and rendering methods are effective and feasible.

20.
This paper presents a method for modelling graphics scenes consisting of multiple volumetric objects. A two-level hierarchical representation is employed, which enables the reduction of the overall storage consumption as well as rendering time. With this approach, different objects can be derived from the same volumetric dataset, and 2D images can be trivially integrated into a scene. The paper also describes an efficient algorithm for rendering such scenes on ordinary workstations, and addresses issues concerning memory requirements and disk swapping.
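A two-level representation of this kind can be sketched as follows (an assumption for illustration, not the paper's data structures): volumetric datasets are stored once at the lower level, while the upper level holds lightweight object instances that reference a shared dataset together with their own placement.

    # Sketch of a two-level scene representation: datasets stored once, instances cheap.
    from dataclasses import dataclass

    @dataclass
    class VolumeDataset:                    # lower level: stored once
        name: str
        voxels: bytes                       # stand-in for the actual 3D grid

    @dataclass
    class SceneObject:                      # upper level: cheap per-instance record
        dataset: VolumeDataset
        translation: tuple = (0.0, 0.0, 0.0)

    datasets = {"engine": VolumeDataset("engine", voxels=b"\x00" * 1024)}
    scene = [SceneObject(datasets["engine"], (0.0, 0.0, 0.0)),
             SceneObject(datasets["engine"], (2.5, 0.0, 0.0))]   # same voxels, new placement

    unique_storage = sum(len(d.voxels) for d in datasets.values())
    naive_storage = sum(len(obj.dataset.voxels) for obj in scene)
    print(f"storage with sharing: {unique_storage} bytes vs duplicated: {naive_storage} bytes")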
