Similar Articles
1.
The wide adoption of path‐tracing algorithms in high‐end realistic rendering has stimulated many diverse research initiatives. In this paper we present a coherent survey of methods that utilize Monte Carlo integration for estimating light transport in scenes containing participating media. Our work complements the volume‐rendering state‐of‐the‐art report by Cerezo et al. [ CPP*05 ]; we review publications accumulated since its publication over a decade ago, and include earlier methods that are key for building light transport paths in a stochastic manner. We begin by describing analog and non‐analog procedures for free‐path sampling and discuss various expected‐value, collision, and track‐length estimators for computing transmittance. We then review the various rendering algorithms that employ these as building blocks for path sampling. Special attention is devoted to null‐collision methods that utilize fictitious matter to handle spatially varying densities; we import two “next‐flight” estimators originally developed in nuclear sciences. Whenever possible, we draw connections between image‐synthesis techniques and methods from particle physics and neutron transport to provide the reader with a broader context.
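A minimal Python sketch of the two building blocks the survey starts from, assuming a homogeneous medium with constant extinction sigma_t (the function names are illustrative, not taken from the survey): analog free-path sampling draws exponentially distributed collision distances, and a track-length style estimator recovers transmittance as the fraction of sampled free paths that exceed a given distance.

```python
import math
import random

def sample_free_path(sigma_t):
    """Analog free-path sampling in a homogeneous medium:
    distances are exponentially distributed with rate sigma_t."""
    return -math.log(1.0 - random.random()) / sigma_t

def transmittance_track_length(sigma_t, distance, n=10000):
    """Track-length style Monte Carlo estimate of transmittance
    T(d) = exp(-sigma_t * d): the fraction of sampled free paths
    that travel farther than the given distance."""
    hits = sum(1 for _ in range(n) if sample_free_path(sigma_t) > distance)
    return hits / n

if __name__ == "__main__":
    sigma_t, d = 0.5, 2.0
    print("estimate:", transmittance_track_length(sigma_t, d))
    print("analytic:", math.exp(-sigma_t * d))
```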

2.
This paper presents efficient algorithms for free path sampling in heterogeneous participating media defined either by high‐resolution voxel arrays or generated procedurally. The method is based on the concept of mixing ‘virtual’ material or particles into the medium, augmenting the extinction coefficient to a function for which the free path can be sampled in a straightforward way. The virtual material is selected such that it modifies the volume density but does not alter the radiance. We define the total extinction coefficient of the real and virtual particles by a low‐resolution grid of super‐voxels that are much larger than the real voxels defining the medium. The computational complexity of the proposed method depends only on the resolution of the super‐voxel grid and does not grow with the resolution above the scale of super‐voxels. The method is particularly efficient for rendering large, low‐density, heterogeneous volumes that would otherwise have to be defined by enormously high‐resolution voxel grids and in which the average free path crosses many voxels.
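The sketch below illustrates the null-collision idea behind this approach under simplifying assumptions: a single constant majorant stands in for the per-super-voxel bound of the real extinction, and the helper names are hypothetical rather than taken from the paper.

```python
import math
import random

def delta_tracking_free_path(sigma_t, sigma_majorant, t_max):
    """Free-path sampling with fictitious ('virtual') particles:
    sigma_majorant must bound the real extinction sigma_t(t) along
    the ray. In the paper the bound comes from a coarse super-voxel
    grid; here one constant majorant stands in for a single super-voxel."""
    t = 0.0
    while True:
        t += -math.log(1.0 - random.random()) / sigma_majorant
        if t >= t_max:
            return None                                  # escaped the medium
        if random.random() < sigma_t(t) / sigma_majorant:
            return t                                     # real collision

if __name__ == "__main__":
    # Procedurally varying extinction, bounded everywhere by 1.0.
    sigma = lambda t: 0.5 + 0.4 * math.sin(t)
    print("sampled collision distance:",
          delta_tracking_free_path(sigma, 1.0, t_max=20.0))
```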

3.
Photo‐realistic rendering of inhomogeneous participating media that accounts for light scattering is important in computer graphics and is typically computed using Monte Carlo based methods. The key technique in such methods is free path sampling, which is used for determining the distance (free path) between successive scattering events. Recently, it has been shown that efficient and unbiased free path sampling methods can be constructed based on Woodcock tracking. The key concept for improving the efficiency is to utilize space partitioning (e.g., a kd‐tree or uniform grid), and a better space partitioning scheme is important for better sampling efficiency. Thus, an estimation framework for investigating the gain in sampling efficiency is important for determining how to partition the space. However, no such estimation framework currently works in 3D space. In this paper, we propose a new estimation framework to overcome this problem. Using our framework, we can analytically estimate the sampling efficiency for any typical partitioned space. Conversely, we can also use this estimation framework for determining the optimal space partitioning. As an application, we show that new space partitioning schemes can be constructed using our estimation framework. Moreover, we show that the differences in performance between different schemes can be predicted fairly well using our estimation framework.

4.
This article concentrates on the particle filtering problem for nonlinear systems with nonlinear equality constraints. Since incorporating constraint information into filters can improve state estimation accuracy, we propose an adaptive constrained particle filter based on constrained sampling. First, in order to obtain particles drawn from the constrained importance density function, we construct and solve a general optimization function that theoretically fuses the equality constraints and the importance density function. Furthermore, to reduce the computation time caused by the number of particles, a constrained Kullback‐Leibler distance sampling method is given to adapt the number of particles needed for state estimation online. A simulation study in the context of road‐confined vehicle tracking demonstrates that the proposed filter outperforms typical constrained filters for equality constrained dynamic systems.
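As a rough illustration of adapting the particle count online, the sketch below implements the standard (unconstrained) KLD-sampling bound of Fox; the paper's constrained variant differs, and the parameter defaults here are arbitrary.

```python
import math

def kld_particle_count(k, epsilon=0.05, z_quantile=2.326):
    """Number of particles needed so that, with probability 1 - delta,
    the KL distance between the particle-based and the true posterior
    stays below epsilon. k = number of histogram bins currently occupied;
    z_quantile = upper (1 - delta) quantile of the standard normal
    (2.326 for delta = 0.01). Standard KLD-sampling bound."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z_quantile) ** 3))

if __name__ == "__main__":
    for k in (2, 10, 50):
        print(k, "occupied bins ->", kld_particle_count(k), "particles")
```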

5.
The conventional particle filter samples particles from the prior density function, so the particle distribution depends on the dynamic model and estimation accuracy suffers. To address this problem, and building on the similarity-sampling particle filter, which samples based on a similarity function of the observations, this paper proposes an improved similarity-sampling particle filter with correlation-based pre-sampling of particles. When the system measurement noise is small, similarity sampling yields particles that are closer to the true posterior distribution and thus improves estimation accuracy, while correlation-based pre-sampling computes the transition probabilities of particles between adjacent time steps and discards low-probability particles, improving particle utilization and significantly reducing the number of particles required while preserving estimation accuracy. The importance density function of the algorithm is designed and the recursive weight-update formula is derived. The proposed algorithm is analyzed through Monte Carlo simulation, and its application is illustrated with a target tracking example in a mixed coordinate system.

6.
Fast realistic rendering of objects in scattering media is still a challenging topic in computer graphics. In the presence of participating media, a light beam is repeatedly scattered by media particles, changing direction and getting spread out. Explicitly evaluating this beam distribution would enable efficient simulation of multiple scattering events without involving costly stochastic methods. Narrow beam theory provides explicit equations that approximate light propagation in a narrow incident beam. Based on this theory, we propose a closed‐form distribution function for scattered beams. We successfully apply it to the image synthesis of scenes in which scattering occurs, and show that our proposed estimation method is more accurate than those based on the Wentzel‐Kramers‐Brillouin (WKB) theory.

7.
We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward scattering, optically dense media with the Henyey‐Greenstein phase function. Instead of a one‐segment connection from a vertex within the medium to the light source, an entire sub‐path of arbitrary length can be created, and we show experimentally that 4–10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated in a Monte Carlo renderer, combining random walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases performance of multiple scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and performance implications of memory‐heavy heterogeneous media.
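For context, a bare-bones sketch of the one-segment baseline that MVNEE generalizes, assuming a homogeneous medium, a point light, and a precomputed phase function value; path-pdf bookkeeping and MIS are omitted, and all names are illustrative.

```python
import math

def next_event_estimate(vertex, light_pos, light_intensity,
                        sigma_t, sigma_s, phase_value):
    """Standard one-segment next event estimation from a vertex inside a
    homogeneous medium to a point light: scattering coefficient * phase
    function value * transmittance * inverse-square falloff. MVNEE replaces
    the single segment by a short, perturbed sub-path of several segments."""
    offset = [l - v for l, v in zip(light_pos, vertex)]
    dist = math.sqrt(sum(c * c for c in offset))
    transmittance = math.exp(-sigma_t * dist)
    return sigma_s * phase_value * transmittance * light_intensity / (dist * dist)

if __name__ == "__main__":
    iso_phase = 1.0 / (4.0 * math.pi)   # isotropic phase function value
    print(next_event_estimate((0, 0, 0), (0, 0, 5), 100.0, 0.2, 0.15, iso_phase))
```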

8.
Participating media are ubiquitous in the real world. Light transport inside participating media is more complex than transport between surfaces: in highly scattering media a path may undergo thousands of bounces, while in weakly scattering media volumetric caustics appear due to focusing from surfaces, making light transport simulation very time-consuming. Commonly used methods include unifying points, beams and paths (UPBP) and manifold exploration Metropolis light transport (MEMLT). These methods improve on earlier approaches to some extent, but in certain cases they still take a long time to converge. This paper reviews several efficient rendering methods for homogeneous participating media. 1) A point-based rendering method for participating media, which distributes points inside the medium to separately accelerate the computation of single, double, and multiple scattering; with a GPU (graphics processing unit) implementation it reaches interactive rates and supports editing of arbitrary homogeneous participating media. 2) A precomputed multiple-scattering model, which precomputes the multiple-scattering distribution in an infinite medium; by analyzing the symmetry of the illumination distribution, the dimensionality of this distribution is reduced from 4D to 3D, and it is applied to various Monte Carlo rendering methods such as MEMLT and UPBP to improve their efficiency. 3) A path-guiding method for participating media, which learns the distribution of light inside the medium, represents it with an SD-tree (spatial-directional tree), and resamples it against the phase function to generate outgoing directions. These three methods accelerate the rendering of participating media from different angles.

9.
We introduce a set of robust importance sampling techniques which allow efficient calculation of direct and indirect lighting from arbitrary light sources in both homogeneous and heterogeneous media. We show how to distribute samples along a ray proportionally to the incoming radiance for point and area lights. In heterogeneous media, we decouple ray marching from light calculations by computing a representation of the transmittance function that can be quickly evaluated during sampling, at the cost of a small amount of bias. This representation also allows the calculation of another probability density function which can direct samples to regions most likely to scatter light. These techniques are orthogonal and can be combined via multiple importance sampling to further reduce variance. Our method has very modest per‐ray memory requirements and does not require any preprocessing, making it simple to integrate into production ray tracing based renderers.
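One widely used instance of placing samples along a ray proportionally to the incoming radiance of a point light is equiangular sampling; the sketch below shows that scheme as an illustration and is not claimed to be the paper's exact formulation.

```python
import math
import random

def equiangular_sample(ray_org, ray_dir, light_pos, t_min, t_max, u=None):
    """Equiangular sampling: place a sample along a ray with density
    proportional to the inverse squared distance to a point light, which
    cancels the 1/r^2 falloff of its incoming radiance.
    ray_dir must be normalized; returns (distance t, pdf)."""
    if u is None:
        u = random.random()
    # Distance along the ray to the point closest to the light.
    delta = sum((l - o) * d for l, o, d in zip(light_pos, ray_org, ray_dir))
    # Perpendicular distance from the light to the ray (clamped away from 0).
    closest = [o + delta * d for o, d in zip(ray_org, ray_dir)]
    D = math.sqrt(sum((l - c) ** 2 for l, c in zip(light_pos, closest))) or 1e-8
    theta_a = math.atan((t_min - delta) / D)
    theta_b = math.atan((t_max - delta) / D)
    t = delta + D * math.tan(theta_a + u * (theta_b - theta_a))
    pdf = D / ((theta_b - theta_a) * (D * D + (t - delta) ** 2))
    return t, pdf

if __name__ == "__main__":
    print(equiangular_sample((0, 0, 0), (1, 0, 0), (5, 2, 0), 0.0, 20.0))
```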

10.
State‐of‐the‐art density estimation methods for rendering participating media rely on a dense photon representation of the radiance distribution within a scene. A critical bottleneck of such kernel‐based approaches is the excessive number of photons that are required in practice to resolve fine illumination details, while controlling the amount of noise. In this paper, we propose a parametric density estimation technique that represents radiance using a hierarchical Gaussian mixture. We efficiently obtain the coefficients of this mixture using a progressive and accelerated form of the Expectation‐Maximization algorithm. After this step, we are able to create noise‐free renderings of high‐frequency illumination using only a few thousand Gaussian terms, where millions of photons are traditionally required. Temporal coherence is trivially supported within this framework, and the compact footprint is also useful in the context of real‐time visualization. We demonstrate a hierarchical ray tracing‐based implementation, as well as a fast splatting approach that can interactively render animated volume caustics.
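A minimal, non-progressive EM sketch for fitting a mixture of isotropic Gaussians to point samples (e.g., photon positions); the paper's hierarchical, accelerated EM and its anisotropic components are not reproduced here, and all parameters are illustrative.

```python
import numpy as np

def fit_isotropic_gmm(points, k=8, iters=50, seed=0):
    """Plain EM for a mixture of isotropic Gaussians, illustrating the kind
    of parametric radiance/density representation fitted to photons.
    points: (n, d) array; returns weights, means, per-component variances."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    means = points[rng.choice(n, k, replace=False)]
    var = np.full(k, points.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point (softmax of log densities).
        d2 = ((points[:, None, :] - means[None]) ** 2).sum(-1)           # (n, k)
        log_p = np.log(w) - 0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var)
        log_p -= log_p.max(1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(1, keepdims=True)
        # M-step: update weights, means and variances from responsibilities.
        nk = r.sum(0) + 1e-12
        w = nk / n
        means = (r.T @ points) / nk[:, None]
        d2 = ((points[:, None, :] - means[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (d * nk) + 1e-6
    return w, means, var

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = np.concatenate([rng.normal(c, 0.2, (500, 3)) for c in (0.0, 2.0)])
    print(fit_isotropic_gmm(pts, k=2)[1])   # recovered cluster means
```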

11.
In this paper, a new particle smoother based on forward filtering backward simulation is developed to solve the nonlinear and non‐Gaussian smoothing problem when measurements are randomly delayed by one sampling time. The heart of the proposed particle smoother is the computation of the delayed posterior filtering density based on a stochastic sampling approach, in which particles and their corresponding weights are updated in a Bayesian estimation framework that accounts for the one‐step randomly delayed measurement model. The superior performance of the proposed particle smoother compared with existing methods is illustrated in a numerical example concerning a univariate non‐stationary growth model.

12.
Visualizing dynamic participating media in particle form by fully solving the equations of light transport theory is computationally very expensive. In this paper, we present a computational pipeline for particle volume rendering that is easily accelerated by the current GPU. To fully harness its massively parallel computing power, we transform input particles into a volumetric density field using a GPU-assisted, adaptive density estimation technique that iteratively adapts the smoothing length for local grid cells. Then, the volume data is visualized efficiently based on the volume photon mapping method, where our GPU techniques further improve the rendering quality offered by previous implementations while performing rendering computation in acceptable time. It is demonstrated that high quality volume renderings can be easily produced from large particle datasets in time frames of a few seconds to less than a minute.

13.
Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods generate image gradients and solve an image reconstruction problem using the rendered image and the gradient images. Recently, gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep learning based reconstruction methods have been exploited for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction in gradient-domain volumetric photon density estimation, more specifically for volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separate auxiliary feature branch that includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the image at a global scale while preserving high-frequency details at a small scale. We demonstrate that our network produces higher quality results than previous work. Although we only consider volumetric photon mapping, it is straightforward to extend our method to other forms, such as beam radiance estimation.
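For reference, a small sketch of the hand-crafted reconstruction step that such learned approaches replace, shown here in its simpler L2 form (the cited work uses L1 and a network); the finite-difference operators, parameters, and test image are illustrative assumptions.

```python
import numpy as np

def dx(img):                       # forward difference in x, zero at the last column
    d = np.zeros_like(img); d[:, :-1] = img[:, 1:] - img[:, :-1]; return d

def dy(img):                       # forward difference in y, zero at the last row
    d = np.zeros_like(img); d[:-1, :] = img[1:, :] - img[:-1, :]; return d

def dxT(g):                        # adjoint of dx
    d = np.zeros_like(g); d[:, :-1] -= g[:, :-1]; d[:, 1:] += g[:, :-1]; return d

def dyT(g):                        # adjoint of dy
    d = np.zeros_like(g); d[:-1, :] -= g[:-1, :]; d[1:, :] += g[:-1, :]; return d

def l2_reconstruct(primal, gx, gy, alpha=0.2, iters=500, step=0.1):
    """Baseline L2 gradient-domain reconstruction by gradient descent on
    alpha*||I - primal||^2 + ||dx(I) - gx||^2 + ||dy(I) - gy||^2."""
    img = primal.copy()
    for _ in range(iters):
        grad = alpha * (img - primal) + dxT(dx(img) - gx) + dyT(dy(img) - gy)
        img -= step * grad
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))     # smooth ramp "image"
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)  # noisy primal estimate
    rec = l2_reconstruct(noisy, dx(clean), dy(clean))        # clean gradients, noisy primal
    print("noisy MSE:", np.mean((noisy - clean) ** 2),
          "reconstructed MSE:", np.mean((rec - clean) ** 2))
```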

14.
Simulation study of a particle-filter-based maneuvering target tracking algorithm
For nonlinear multi-target models, the particle filter is applied; this approach is not restricted by linearity or Gaussian assumptions and is an effective algorithm for recursive state estimation in nonlinear, non-Gaussian dynamic systems. On the basis of the particle filter, the extended Kalman filter and the unscented Kalman filter are fused in. When the fused algorithm computes the proposal probability density, particle generation fully accounts for the current measurement, so the particle distribution is closer to the posterior distribution of the state; a smoothing algorithm then post-processes the filtering results. Simulation results show that the algorithm achieves good tracking performance.
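As a baseline illustration, the sketch below is a plain bootstrap (SIR) particle filter run on the common univariate nonlinear growth benchmark; the fused EKF/UKF-proposal filter described above differs in how the proposal density is built, and all parameter values here are arbitrary.

```python
import numpy as np

def bootstrap_particle_filter(measurements, f, h, q_std, r_std, n=500, x0=0.0, seed=0):
    """Bootstrap (SIR) particle filter: propagate particles through the prior
    transition model f, weight them by the Gaussian measurement likelihood
    around h, estimate, then resample. An EKF/UKF-based proposal would instead
    draw particles conditioned on the current measurement."""
    rng = np.random.default_rng(seed)
    particles = np.full(n, float(x0)) + q_std * rng.standard_normal(n)
    estimates = []
    for k, z in enumerate(measurements):
        particles = f(particles, k) + q_std * rng.standard_normal(n)       # predict
        w = np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2) + 1e-300      # weight
        w /= w.sum()
        estimates.append(np.sum(w * particles))                            # estimate
        particles = particles[rng.choice(n, n, p=w)]                       # resample
    return np.array(estimates)

if __name__ == "__main__":
    # Univariate nonlinear growth model, a standard particle-filter benchmark.
    f = lambda x, k: 0.5 * x + 25 * x / (1 + x * x) + 8 * np.cos(1.2 * k)
    h = lambda x: x * x / 20.0
    rng = np.random.default_rng(1)
    truth, zs, x = [], [], 0.1
    for k in range(50):
        x = f(x, k) + rng.normal(0, np.sqrt(10))
        truth.append(x)
        zs.append(h(x) + rng.normal(0, 1.0))
    est = bootstrap_particle_filter(zs, f, h, q_std=np.sqrt(10), r_std=1.0)
    print("RMSE:", np.sqrt(np.mean((est - np.array(truth)) ** 2)))
```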

15.
In the density estimation task, the Maximum Entropy (Maxent) model can effectively use reliable prior information via nonparametric constraints, that is, linear constraints without empirical parameters. However, reliable prior information is often insufficient, and parametric constraints become necessary but pose considerable implementation complexity. Improper setting of parametric constraints can result in overfitting or underfitting. To alleviate this problem, a generalization of Maxent, under the Tsallis entropy framework, is proposed. The proposed method introduces a convex quadratic constraint for the correction of the (expected) quadratic Tsallis Entropy Bias (TEB). Specifically, we demonstrate that the expected quadratic Tsallis entropy of sampling distributions is smaller than that of the underlying real distribution under the frequentist, Bayesian prior, and Bayesian posterior frameworks, respectively. This expected entropy reduction is exactly the (expected) TEB, which can be expressed by a closed‐form formula and acts as a consistent and unbiased correction with an appropriate convergence rate. TEB indicates that the entropy of a specific sampling distribution should be increased accordingly. This entails a quantitative reinterpretation of the Maxent principle. By compensating for TEB while forcing the resulting distribution to be close to the sampling distribution, our generalized quadratic Tsallis Entropy Bias Compensation (TEBC) Maxent can be expected to alleviate overfitting and underfitting. We also present a connection between TEB and the Lidstone estimator. As a result, a TEB–Lidstone estimator is developed by analytically identifying the rate of probability correction in Lidstone. Extensive empirical evaluation shows promising performance of both TEBC Maxent and TEB‐Lidstone in comparison with various state‐of‐the‐art density estimation methods.
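A tiny sketch of the classical Lidstone estimator referenced above, with an arbitrary pseudo-count; the paper's contribution is an analytic, TEB-derived choice of the correction rate, which is not reproduced here.

```python
def lidstone_estimate(counts, alpha=0.5):
    """Lidstone-smoothed probability estimate: add a pseudo-count alpha to
    every category before normalizing. alpha = 1 gives Laplace smoothing;
    alpha = 0 gives the raw maximum-likelihood frequencies."""
    total = sum(counts)
    k = len(counts)
    return [(c + alpha) / (total + alpha * k) for c in counts]

if __name__ == "__main__":
    # Unseen category (count 0) still receives nonzero probability mass.
    print(lidstone_estimate([7, 2, 0, 1], alpha=0.5))
```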

16.
To address particle degeneracy and abnormal particles caused by the lack of observation information during particle propagation, this paper proposes a self-feedback target tracking algorithm based on trial sampling. After sampling in the current frame, the algorithm trial-samples particles one frame ahead and feeds the result back to the current frame; this advance sampling of future frames incorporates observation information into the state transition model, so that the probability density distribution approaches the true one. By analyzing the relationship between particle weights in adjacent frames, abnormal elements are discarded and partial resampling is performed, alleviating degeneracy while preserving the diversity of the sample set. The target state is estimated with a weighted maximum a posteriori criterion, improving tracking accuracy and stability. Experimental results show that the proposed algorithm improves the quality of the state space and achieves better tracking performance than other algorithms.

17.
Efficiently simulating the full range of light effects in arbitrary input scenes that contain participating media is a difficult task. Unified points, beams and paths (UPBP) is an algorithm capable of capturing a wide range of media effects by combining bidirectional path tracing (BPT) and photon density estimation (PDE) with multiple importance sampling (MIS). A computationally expensive task of UPBP is the MIS weight computation, performed each time a light path is formed. We derive an efficient algorithm to compute the MIS weights for UPBP, which improves over previous work by eliminating the need to iterate over the path vertices. We achieve this by maintaining recursive quantities as subpaths are generated, from which the subpath weights can be computed. In this way, the full path weight can be computed using only the data cached at the two vertices at the ends of the subpaths. Furthermore, a costly part of PDE is the search for nearby photon points and beams. Previous work has shown that a spatial data structure for photon mapping can be implemented using the hardware-accelerated bounding volume hierarchy of NVIDIA's RTX GPUs. We show that the same technique can be applied to different types of volumetric PDE and compare the performance of these data structures with the state of the art. Finally, using our new algorithm and data structures we fully implement UPBP on the GPU, which, to the best of our knowledge, we are the first to do.
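For orientation, a one-line sketch of the balance-heuristic MIS weight that such technique combinations rely on; the paper's recursive, per-subpath formulation for UPBP is not shown, and the example numbers are made up.

```python
def balance_heuristic(pdf_this, pdf_others):
    """Balance-heuristic MIS weight for one sampling technique, given the pdfs
    with which every competing technique could have generated the same path.
    The weights of all techniques for a given path sum to one."""
    return pdf_this / (pdf_this + sum(pdf_others))

if __name__ == "__main__":
    # Two hypothetical techniques that could have produced the same light path.
    print(balance_heuristic(0.8, [0.2]))   # weight assigned to technique A
    print(balance_heuristic(0.2, [0.8]))   # weight assigned to technique B
```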

18.
A recent technique that forms virtual ray lights (VRLs) from path segments in media reduces the artifacts common to VPL approaches in participating media; however, distracting singularities still remain. We present Virtual Beam Lights (VBLs), a progressive many‐lights algorithm for rendering complex indirect transport paths in, from, and to media. VBLs are efficient and can handle heterogeneous media, anisotropic scattering, and moderately glossy surfaces, while provably converging to ground truth. We inflate ray lights into beam lights with finite thickness to eliminate the remaining singularities. Furthermore, we devise several practical schemes for importance sampling the various transport contributions between camera rays, light rays, and surface points. VBLs produce artifact‐free images faster than VRLs, especially when glossy surfaces and/or anisotropic phase functions are present. Lastly, we employ a progressive thickness reduction scheme for VBLs in order to render results that converge to ground truth.

19.
Zhang Wei, Huang Weimin. Acta Automatica Sinica, 2022, 48(10): 2585–2599
In multi-objective particle swarm optimization, balancing convergence and diversity is the key to obtaining a well-distributed, high-accuracy Pareto front. Most existing methods rely on a single strategy to guide the particle search, which leads to insufficient convergence and diversity on complex problems. To address this, a multi-strategy adaptive multi-objective particle swarm optimization algorithm based on population partitioning is proposed. The convergence contribution of particles is used to monitor the state of the search and to adaptively adjust the particles' exploration and exploitation. To assign appropriate search strategies to particles of different quality, a multi-strategy global-best selection method and a multi-strategy mutation method are proposed: according to a convergence evaluation index, the population is divided into three regions, and particle quality is coupled with the optimization process to improve the search efficiency of every particle in the population. To avoid stagnation and local optima caused by personal-best particles that cannot effectively guide particle flight, a personal-best selection method with a memory interval is proposed, which improves the reliability of personal-best selection and accelerates particle convergence. The external archive is maintained with a fused indicator that combines two performance measures, avoiding the degeneration that occurs when maintenance based only on particle density deletes particles with good convergence and weakens exploitation. Simulation results show that, compared with several other multi-objective optimization algorithms, the proposed algorithm achieves good convergence and diversity.

20.
To address the low accuracy and filter divergence of probability hypothesis density (PHD) based nonlinear multi-target tracking, a new PHD algorithm, the advanced square-root imbedded cubature particle PHD (ASRICP-PHD), is proposed. During initialization the entire sampling region is divided into several equal-probability sub-regions, particles are drawn from each sub-region according to a fixed rule, and each particle is filtered with the square-root imbedded cubature filter to approximate the importance density function and to predict and update the PHD of the multi-target state. Simulation results show that the algorithm tracks multiple targets effectively; compared with quasi-random and pseudo-random sampling, the equal-probability sampling scheme achieves higher accuracy in estimating both the positions and the number of targets.
