Similar Documents
Found 20 similar documents; search time: 990 ms.
1.
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILFs) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position, and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

2.
This paper presents a technique to recover geometry from time-lapse sequences of outdoor scenes. We build upon photometric stereo techniques to recover approximate shadowing, shading and normal components, allowing us to alter the materials and normals of the scene. Previous work in analyzing such images has faced two fundamental difficulties: (1) the illumination in outdoor images consists of time-varying sunlight and skylight, and (2) the motion of the sun is restricted to a near-planar arc through the sky, making surface normal recovery unstable. We develop methods to estimate the reflection component due to skylight illumination. We also show that sunlight directions are usually non-planar, thus making surface normal recovery possible. This allows us to estimate approximate surface normals for outdoor scenes using a single day of data. We demonstrate the use of these surface normals for a number of image editing applications, including reflectance, lighting and normal editing.

3.
Rendering global illumination for objects with mesostructure surfaces is time-consuming and cannot presently be done in interactive graphics. This paper presents a real-time rendering method based on a mesostructure height gradient map (MHGM) to exhibit lighting effects on meso-scale details in dynamic environments. We approximate global illumination using a lighting model with three components: incident ambient light, direct light and single-bounce indirect light. The MHGM is introduced to create local apex sets, which help compute the three components adaptively. Our approach runs entirely on graphics hardware, using deferred shading and the graphics pipeline to accelerate computation. We achieve high-quality results, rendering meso-scale details with approximate global illumination even for low-resolution geometric models. Moreover, our approach fully supports dynamic scenes and deformable objects.

4.
Video painting with space-time-varying style parameters
Artists use different means of stylization to control the focus on different objects in the scene. This allows them to portray complex meaning and achieve certain artistic effects. Most prior work on painterly rendering of videos, however, uses only a single painting style, with fixed global parameters, irrespective of objects and their layout in the images. This often leads to inadequate artistic control. Moreover, brush stroke orientation is typically assumed to follow an everywhere continuous directional field. In this paper, we propose a video painting system that accounts for the spatial support of objects in the images or videos, and uses this information to specify style parameters and stroke orientation for painterly rendering. Since objects occupy distinct image locations and move relatively smoothly from one video frame to another, our object-based painterly rendering approach is characterized by style parameters that coherently vary in space and time. Space-time-varying style parameters enable more artistic freedom, such as emphasis/de-emphasis, increase or decrease of contrast, exaggeration or abstraction of different objects in the scene in a temporally coherent fashion.

5.
We propose a method for converting a single image of a transparent object into multi-view images that let users observe the object from multiple new angles, without any input 3D shape. The complex light paths formed by refraction and reflection make it challenging to compute the lighting effects of transparent objects from a new angle. We construct an encoder-decoder network for normal reconstruction and texture extraction, which enables synthesizing novel views of a transparent object under new viewpoints and new environment maps using only one RGB image. By simultaneously considering optical transmission and perspective variation, our network learns the characteristics of optical transmission and the change of perspective as guidance for the conversion from RGB colours to surface normals. A texture extraction subnetwork is proposed to alleviate the contour-loss phenomenon during normal map generation. We test our method on 3D objects inside and outside our training data, including real 3D objects in our lab and completely new environment maps captured with our phones. The results show that our method performs better on view synthesis of transparent objects in complex scenes using only a single-view image.

6.
谢宁, 赵婷婷, 杨阳, 魏琴, Heng Tao SHEN. 《软件学报》 (Journal of Software), 2018, 29(4): 1071-1084
Learning-based intelligent image stylization is currently a hot research topic in multimedia, in particular artistic stylization. Image style learning studies the use of multimedia data and machine learning methods to automatically and intelligently render real sample data in artistic styles. Mainstream methods learn from static samples of artistic images. However, because the information contained in static data is flat, local and discontinuous, it is difficult to guarantee global consistency of the stylization. Targeting the large-scale, complex multimedia data of the creative process, this paper develops intelligent artistic style rendering from sequential task learning theory at three levels: a theoretical model, a design method and an optimization method. The main contributions are: (1) a multimedia data acquisition device and software system for digital fine arts; (2) modeling of artistic style behavior and its digital preservation using an IRL (inverse reinforcement learning) algorithm; (3) a PGPE-based regularized policy learning method that improves the stability of the style learning process. Experimental results show that the proposed method effectively converts photographs into ink-wash paintings in a specific personal style. Built on sequential multimedia data acquisition and analysis, the proposed mobile-Internet-oriented automatic artistic style rendering assistant system is not only theoretically novel but also of great practical value.

7.
Achieving convincing visual consistency between virtual objects and a real scene mainly relies on the lighting effects of virtual-real composite scenes. The problem becomes more challenging when lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress. The estimated geometry, semantic labels and intrinsic components provide mostly coarse information, and are not accurate enough to re-render the whole scene. However, carefully integrating the estimated coarse information can lead to an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene understanding technology to estimate the parameters of a ray-based illumination model to light virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is treated as the irradiance of the selected small surfaces. The virtual objects are illuminated with the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment or imaging information of the image.

8.
The StOMP algorithm is well suited to large-scale underdetermined applications in sparse vector estimation. It can reduce computational complexity and has attractive asymptotic statistical properties. However, its estimation speed comes at the cost of accuracy. This paper suggests an improvement on the StOMP algorithm that is more efficient in finding a sparse solution to large-scale underdetermined problems. Compared with StOMP, the modified algorithm not only estimates the parameters of the distribution of matched-filter coefficients more accurately, but also improves estimation accuracy for the sparse vector itself. A theoretical success boundary is provided, based on a large-system limit, for approximate recovery of the sparse vector by the modified algorithm, which validates that it is more efficient than StOMP. Computations with simulated data show that, without a significant increase in computation time, the proposed algorithm can greatly improve estimation accuracy.
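For readers unfamiliar with the baseline, the original StOMP iteration the abstract improves on can be sketched in a few lines of numpy. This is a minimal sketch of the standard algorithm, not the paper's modified version; the threshold scale `t` and stage count are illustrative choices:

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.0):
    """Stagewise Orthogonal Matching Pursuit (StOMP), baseline sketch.

    Recovers a sparse x from y = A @ x by hard-thresholding the
    matched-filter coefficients A.T @ r in stages, then refitting by
    least squares on the merged support.
    """
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_stages):
        if np.linalg.norm(residual) <= 1e-10 * np.linalg.norm(y):
            break                                 # y already explained
        c = A.T @ residual                        # matched-filter coefficients
        sigma = np.linalg.norm(residual) / np.sqrt(m)
        new = np.abs(c) > t * sigma               # per-stage hard threshold
        if not np.any(new & ~support):
            break                                 # no new atoms found
        support |= new
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A @ x
    return x
```

On noiseless data with a sufficiently sparse ground truth, the merged-support least-squares refit typically recovers the sparse vector exactly within a stage or two.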

9.
We propose a novel method that automatically analyzes stroke-related artistic styles of paintings. A set of adaptive interfaces is also developed to connect the style analysis with existing painterly rendering systems, so that the specific artistic style of a template painting can be effectively transferred to the input photo with minimal effort. Unlike conventional texture-synthesis-based rendering techniques that focus mainly on texture features, this work extracts, analyzes and simulates high-level style features expressed by artists' brush-stroke techniques. Through experiments, user studies and comparisons with ground truth, we demonstrate that the proposed style-oriented painting framework can significantly reduce tedious parameter adjustment, and that it allows amateur users to efficiently create desired artistic styles simply by specifying a template painting.

10.
Lambertian reflectance and linear subspaces
We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
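The near-9D claim is easy to probe numerically: a matrix of Lambertian intensities max(0, n·l), sampled over random surface normals and distant light directions, should concentrate almost all of its energy in its first nine singular modes. A small numpy experiment along these lines (the sample counts are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unit(k):
    """k random unit vectors, uniformly distributed on the sphere."""
    v = rng.standard_normal((k, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

normals = rand_unit(500)   # surface normals, one per "pixel"
lights = rand_unit(200)    # distant directional light sources

# One Lambertian image per light: clamped-cosine intensities
images = np.maximum(normals @ lights.T, 0.0)

s = np.linalg.svd(images, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
nine_dim_energy = energy[8]   # fraction captured by a 9D subspace (~0.99)
```

Consistent with the analytic result, the first nine modes capture roughly 99% of the energy even though the clamped cosine has an infinite spherical-harmonic expansion.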

11.
Objective: Intrinsic image decomposition is a fundamental problem in computer vision and graphics, aiming to separate the reflectance (texture) and illumination components of the scene in an image. Deep-learning-based methods are limited by existing datasets and suffer from over-smoothed decomposition results and poor generalization to real data. Method: We first design a graph-convolution-based module that explicitly accounts for non-local information in the image. To enable the trained network to handle more complex lighting conditions, we render a high-quality synthetic dataset. In addition, we introduce a neural-network-based albedo refinement module that improves the local smoothness of the recovered albedo image. Results: Training different methods on the proposed dataset, compared with training on the earlier synthetic dataset CGIntrinsics, lowers the mean WHDR (weighted human disagreement rate) on the IIW (intrinsic images in the wild) test set by 7.29% and raises the AP (average precision) on the SAW (shading annotations in the wild) test set by 2.74%. The proposed graph-convolution network achieves good results on both IIW and SAW and is visually clearly superior to previous methods. Moreover, the intrinsic components obtained by our algorithm yield better results on image editing tasks such as relighting, texture editing and illumination editing. Conclusion: The proposed dataset is of higher quality and benefits the training of neural intrinsic decomposition models. By explicitly incorporating a non-local prior, the proposed model produces better intrinsic decompositions, which is further validated by a series of application tasks.

12.
In this paper we argue for our NPAR system as an effective 2D alternative to most NPR research, which is focused on frame-coherent stylised rendering of 3D models. Our approach gives a highly stylised look to images without the support of 3D models. Nevertheless, they still behave as though they are animated by drawing, which they are. First, a stylised brush tool is used to freely draw extreme poses of characters. Each character is built of 2D drawn brush strokes which are manually grouped into layers. Each layer is assigned its place in a drawing hierarchy called a hierarchical display model (HDM). Next, multiple HDMs are created for the same character, each corresponding to a specific view. A collection of HDMs essentially reintroduces some correspondence information to the 2D drawings needed for inbetweening and, in effect, eliminates the need for a true 3D model. Once the models are composed, the animator starts by defining keyframes from extreme poses in time. Next, brush stroke trajectories defined by the keyframe HDMs are inbetweened automatically across intermediate frames. Finally, each HDM of each generated inbetween frame is traversed and all elements are drawn one on another from back to front. Our techniques support highly rendered styles which are particularly difficult to animate by traditional means, including the 'airbrushed', scraperboard, watercolour, Gouache, 'ink-wash', pastel and 'crayon' styles. In addition, we describe the data path to be followed to create highly stylised animations by incorporating real footage. We believe our system offers a fresh perspective on computer-aided animation production and associated tools.

13.
Objective: In real-time rendering, instant radiosity is one of the algorithms used to simulate indirect glossy reflection in real time. The instant-radiosity-based GGX SLC (stochastic light culling) algorithm computes indirect glossy reflection with the physically based GGX BRDF (bidirectional reflectance distribution function) lighting model; this is computationally expensive, and the cost grows markedly and linearly with the number of virtual point lights. To address this, a more efficient real-time indirect glossy reflection rendering algorithm is proposed. Method: Using linearly transformed spherical distributions, the expensive GGX BRDF spherical distribution is approximated by a spherical distribution that is cheaper to evaluate, and fast physically based lighting models for single-point-light and multi-point-light environments are derived from it; this lighting model has lower computational cost than the GGX BRDF model. On top of it, a real-time indirect glossy reflection rendering algorithm is proposed that computes the radiant intensity of each virtual point light at a shading point and shades the point with the multi-point-light model, rendering indirect glossy reflection efficiently. Results: Experiments show that the improved algorithm achieves rendering effects similar to those of GGX SLC with higher rendering efficiency, improving performance by 20% to 40%, and …

14.
We present a real-time rendering algorithm for inhomogeneous, single scattering media, where all-frequency shading effects such as glows, light shafts, and volumetric shadows can all be captured. The algorithm first computes source radiance at a small number of sample points in the medium, then interpolates these values at other points in the volume using a gradient-based scheme that is efficiently applied by sample splatting. The sample points are dynamically determined based on a recursive sample splitting procedure that adapts the number and locations of sample points for accurate and efficient reproduction of shading variations in the medium. The entire pipeline can be easily implemented on the GPU to achieve real-time performance for dynamic lighting and scenes. Rendering results of our method are shown to be comparable to those from ray tracing.
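The gradient-based interpolation step can be illustrated with a simple scattered-data scheme: each sample extrapolates to the query point with a first-order Taylor term, and the extrapolations are blended with inverse-distance weights. This is only a stand-in sketch of the idea, not the paper's splatting implementation; the weighting function is an illustrative assumption:

```python
import numpy as np

def gradient_interp(points, values, grads, queries, eps=1e-9):
    """Gradient-based scattered interpolation (sketch).

    points:  (n, d) sample positions
    values:  (n,)   sampled source radiance
    grads:   (n, d) sampled radiance gradients
    queries: (q, d) positions to reconstruct
    """
    out = np.empty(len(queries))
    for i, q in enumerate(queries):
        d = np.linalg.norm(points - q, axis=1)
        w = 1.0 / (d**2 + eps)                      # inverse-distance weights
        # first-order extrapolation of every sample to the query point
        taylor = values + np.einsum('ij,ij->i', grads, q - points)
        out[i] = np.sum(w * taylor) / np.sum(w)
    return out
```

Because each sample carries its gradient, the scheme reproduces any linear radiance field exactly, which is why gradient-based interpolation needs far fewer samples than value-only interpolation for smoothly varying media.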

15.
A model for volume lighting and modeling
Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. For many volumes, homogeneous regions pose problems for typical gradient-based surface shading. Many common objects and natural phenomena exhibit visual quality that cannot be captured using simple lighting models, or cannot be reproduced at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model that captures volumetric light attenuation, incorporating volumetric shadows, an approximation to phase functions, an approximation to forward scattering, and chromatic attenuation that provides the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high-frequency detail for both real and synthetic volumetric data.

16.
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering of animation sequences becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights to efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames, while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination and arbitrary shading, and show that a surprisingly small number of representative lights is sufficient for high-quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.

17.
18.
In this paper, we propose an interactive system for generating artistic sketches from images, based on the stylized multiresolution B-spline curve model and the livewire contour tracing paradigm. Our multiresolution B-spline stroke model allows interactive and continuous control of the style and shape of the stroke at any level of detail. In particular, we introduce a novel mathematical paradigm called the wavelet frame to provide essential properties for multiresolution stroke editing, such as feature point preservation, locality, time efficiency and good approximation. The livewire stroke map construction leads the user-guided stroke to automatically lock on to the target contour, allowing fast and accurate sketch drawing. We classify the target contours as outlines and interior flow, and develop two respective livewire techniques based on extended graph formulation and a vector flow field. Experimental results show that the proposed system facilitates quick and easy generation of artistic sketches of various styles.

19.
We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that not only extract their location, but also their profile, which permits distinguishing between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size or opacity to give rise to a wide range of line-based styles.
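The footprint-along-skeleton construction can be mimicked in a few lines: stamp a brush footprint at every pixel of a binary feature-skeleton mask and blend the stamps. The max-blending rule and the footprint contents here are illustrative assumptions, not the paper's screen-space formulation:

```python
import numpy as np

def stamp_brush(skeleton, footprint):
    """Stamp (max-blend) a brush footprint at every skeleton pixel.

    skeleton:  (h, w) boolean mask of feature-skeleton pixels
    footprint: (fh, fw) brush opacity patch, odd dimensions assumed
    """
    h, w = skeleton.shape
    fh, fw = footprint.shape
    ry, rx = fh // 2, fw // 2
    out = np.zeros((h, w))
    for y, x in zip(*np.nonzero(skeleton)):
        # clip the stamp window against the image borders
        y0, y1 = max(0, y - ry), min(h, y + ry + 1)
        x0, x1 = max(0, x - rx), min(w, x + rx + 1)
        fy0, fx0 = y0 - (y - ry), x0 - (x - rx)
        patch = footprint[fy0:fy0 + (y1 - y0), fx0:fx0 + (x1 - x0)]
        out[y0:y1, x0:x1] = np.maximum(out[y0:y1, x0:x1], patch)
    return out
```

In the actual technique the footprint's orientation, size and opacity would vary per pixel with the fitted feature profile, which is what produces the range of line styles.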

20.
Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues to aid in understanding structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号