Wavefront raycasting using larger filter kernels for on-the-fly GPU gradient reconstruction
Authors:Baoquan Liu  Gordon J. Clapworthy  Feng Dong
Affiliation:1. Department of Computer Science and Technology, University of Bedfordshire, Luton, UK
Abstract:The quality of images generated by volume rendering strongly depends on the accuracy of gradient estimation. However, the most commonly used techniques for on-the-fly gradient reconstruction are still very simple, such as central differences; they generally gather only limited neighbourhood information and thus ultimately produce rather poor-quality images. While there are many higher-order reconstruction methods, such as 3×3×3 or 5×5×5 filters, which can improve the quality, their excessive sampling costs have meant that they are generally used only for pre-computed gradients, which are then quantized and stored for later runtime re-interpolation. This may introduce further errors and, significantly, may consume valuable texture memory. In this paper, we address these issues by proposing a CUDA-based rendering framework that uses larger filter kernels for on-the-fly gradient computation in real-time raycasting applications. By using adaptive wavefront tracing, our approach can dramatically reduce the memory bandwidth requirements associated with the larger neighbourhood sampling. To further ensure that samples are consumed wisely, we have devised a novel adaptive sampling scheme and a customized 3D mipmapping technique in the CUDA environment that samples at an appropriate level of detail as the ray recedes into the distance. We compared our technique with two previous state-of-the-art GPU raycasting algorithms and found that it achieves higher image quality and faster rendering performance than the previous methods across a variety of data sets.
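To illustrate the cost gap the abstract refers to, the following CUDA device-code sketch contrasts central-difference gradient reconstruction (6 extra texture fetches per sample) with a 3×3×3 Sobel-style filter (27 fetches per sample), the kind of larger kernel whose bandwidth the paper's wavefront scheme is designed to amortise. This is a minimal, illustrative sketch only, not the authors' implementation; the function names, the cudaTextureObject_t volume, and the voxel-spacing parameter are assumptions, and the code is meant to be called from inside a raycasting kernel.

    // Illustrative sketch (not the paper's code): on-the-fly gradient
    // reconstruction from a 3D texture holding the volume data.
    #include <cuda_runtime.h>

    // Central differences: 6 neighbourhood fetches per gradient.
    __device__ float3 gradientCentralDiff(cudaTextureObject_t vol,
                                          float3 p, float3 voxel)
    {
        float gx = tex3D<float>(vol, p.x + voxel.x, p.y, p.z) -
                   tex3D<float>(vol, p.x - voxel.x, p.y, p.z);
        float gy = tex3D<float>(vol, p.x, p.y + voxel.y, p.z) -
                   tex3D<float>(vol, p.x, p.y - voxel.y, p.z);
        float gz = tex3D<float>(vol, p.x, p.y, p.z + voxel.z) -
                   tex3D<float>(vol, p.x, p.y, p.z - voxel.z);
        return make_float3(0.5f * gx, 0.5f * gy, 0.5f * gz);
    }

    // 3x3x3 Sobel-style filter: 27 neighbourhood fetches per gradient.
    // Weights are the separable product of a derivative kernel [-1,0,1]
    // along the gradient axis and a smoothing kernel [1,2,1] along the
    // other two axes.
    __device__ float3 gradientSobel27(cudaTextureObject_t vol,
                                      float3 p, float3 voxel)
    {
        const float d[3] = { -1.f, 0.f, 1.f };  // derivative weights
        const float s[3] = {  1.f, 2.f, 1.f };  // smoothing weights
        float3 g = make_float3(0.f, 0.f, 0.f);
        for (int k = 0; k < 3; ++k)
            for (int j = 0; j < 3; ++j)
                for (int i = 0; i < 3; ++i) {
                    float v = tex3D<float>(vol,
                                           p.x + (i - 1) * voxel.x,
                                           p.y + (j - 1) * voxel.y,
                                           p.z + (k - 1) * voxel.z);
                    g.x += d[i] * s[j] * s[k] * v;
                    g.y += s[i] * d[j] * s[k] * v;
                    g.z += s[i] * s[j] * d[k] * v;
                }
        return g;
    }

Evaluating the 27-fetch filter at every ray sample is what makes such kernels impractical for naive per-sample use; sharing the fetched neighbourhood across the rays of a wavefront, as the paper proposes, is one way to reduce the redundant bandwidth.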
Keywords:
Indexed by SpringerLink and other databases.