Similar Documents
20 similar documents found (search time: 172 ms)
1.
We address the issue of illumination and acceleration in special relativistic visualization. Betts (J. Visual. Comput. Animat. 1998; 9: 17–31) presents an incorrect derivation of the transformation of Rayleigh–Jeans radiation, which we compare to the correct transformation of radiance in the framework of special relativity. His rendering algorithm can be modified to account correctly for the relativistic effects on illumination. Furthermore, we show how acceleration can be included in special relativistic visualization by calculating the trajectories of accelerating objects, which is a prerequisite for a physically based camera model. Interaction and animation in special relativistic visualization therefore become possible. Copyright © 2000 John Wiley & Sons, Ltd.
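
For context, the physics at issue can be stated compactly. The following is the textbook special-relativistic transformation of spectral radiance (standard relativistic radiometry, not quoted from the paper itself; sign and angle conventions vary between texts):

```latex
% Spectral radiance over frequency cubed is a Lorentz invariant, so for
% relative speed beta with Doppler factor D:
\[
  \frac{L'_{\nu'}}{\nu'^{3}} = \frac{L_{\nu}}{\nu^{3}},
  \qquad
  \nu' = D\,\nu,
  \qquad
  D = \frac{1}{\gamma\,(1-\beta\cos\theta)},
  \qquad
  \gamma = \frac{1}{\sqrt{1-\beta^{2}}},
\]
\[
  \text{hence}\qquad L'_{\nu'}(\nu') = D^{3}\,L_{\nu}(\nu).
\]
```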

2.
We propose an efficient approach for interactive visualization of massive models with CPU ray tracing. A voxel-based hierarchical level-of-detail (LOD) framework is employed to minimize rendering time and the required system memory. In a pre-processing phase, a compressed out-of-core data structure is constructed, which contains the original primitives of the model and the LOD voxels, organized into a kd-tree. During rendering, data is loaded asynchronously to ensure smooth inspection of the model regardless of the available I/O bandwidth. With our technique, we are able to explore data sets consisting of hundreds of millions of triangles in real time on a desktop PC with a quad-core CPU.
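
As a rough illustration of the LOD decision such a framework makes per ray, here is a minimal, runnable sketch (not the authors' code; all constants are invented): refine down the kd-tree only until a node's voxel footprint projects below one pixel.

```python
import numpy as np

# Hedged toy: pick the coarsest LOD level whose voxel, projected to the
# screen at the given distance, is smaller than one pixel.

def lod_level(distance, fov_y=np.radians(60), screen_h=1080,
              root_voxel=1.0, levels=16):
    """Coarsest kd-tree depth whose voxels project smaller than a pixel."""
    pixel_angle = fov_y / screen_h           # angular size of one pixel
    for level in range(levels):
        voxel = root_voxel / (2 ** level)    # voxel edge halves per level
        if voxel / distance < pixel_angle:   # small-angle approximation
            return level                     # voxel proxy is sub-pixel: stop
    return levels - 1                        # fall back to raw triangles

for d in (2.0, 20.0, 200.0):
    print(f"distance {d:6.1f} -> refine to kd-tree level {lod_level(d)}")
```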

3.
Computer graphics artists often resort to compositing to rework light effects in a synthetic image without requiring a new render. Shadows are primary subjects of artistic manipulation, as they carry important stylistic information while our perception is tolerant of their editing. In this paper we formalize the notion of a global shadow, generalizing the direct shadow found in previous work to a global illumination context. We define an object's shadow layer as the difference between two altered renders of the scene: a shadow layer contains the radiance lost on the camera film because of a given object. We translate this definition into the theoretical framework of Monte Carlo integration, obtaining a concise expression of the shadow layer. Building on it, we propose a path tracing algorithm that renders both the original image and any number of shadow layers in a single pass; the user may choose to separate shadows on a per-object and per-light basis, enabling intuitive and decoupled edits.
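
A minimal sketch of the single-pass idea, under the toy assumption of a one-band occluder on an area light (not the paper's estimator): each light sample is tagged by whether the chosen object blocked it, so the image and that object's shadow layer accumulate in the same Monte Carlo loop.

```python
import random

# Hedged toy: shadow layer = radiance lost because of one object,
# estimated alongside the image itself in a single pass.

random.seed(1)

def sample_light():
    """One light sample: (radiance, blocked_by_object)."""
    x = random.random()                 # point on an area light
    blocked = 0.3 < x < 0.5             # the object occludes this band
    return 1.0, blocked

N = 100_000
image = shadow_layer = 0.0
for _ in range(N):
    L, blocked = sample_light()
    if not blocked:
        image += L                      # radiance reaching the film
    else:
        shadow_layer += L               # radiance lost to the object
image /= N
shadow_layer /= N
print(f"image={image:.3f}  shadow layer={shadow_layer:.3f}  "
      f"unshadowed={image + shadow_layer:.3f}")
```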

4.
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been studied extensively, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically based materials in real time, while accounting for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to a perfect ground truth, and we report experiments with real captured data covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.
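
The HDR-expansion step can be caricatured in a few lines. The sketch below (not the paper's pipeline; the gamma curve and overlap region are assumptions) fits an inverse camera response on the pixels an LDR rendering shares with an HDR exemplar and applies it everywhere; the real method additionally propagates the expansion across the cloud with a Poisson solve.

```python
import numpy as np

# Hedged toy: recover an inverse response log(hdr) ~ a*log(ldr) + b on
# the overlap with the HDR exemplar, then expand all LDR values.

rng = np.random.default_rng(0)
hdr_true = rng.uniform(0.05, 1.0, 1000)        # ground-truth radiance
ldr = hdr_true ** (1 / 2.2)                    # camera-applied gamma curve
overlap = slice(0, 300)                        # pixels the HDR photo covers

a, b = np.polyfit(np.log(ldr[overlap]), np.log(hdr_true[overlap]), 1)
hdr_est = np.exp(a * np.log(ldr) + b)

err = np.median(np.abs(hdr_est - hdr_true) / hdr_true)
print(f"recovered exponent {a:.2f} (true 2.20), median rel. error {err:.1e}")
```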

5.
Computation of illumination with soft shadows from all-frequency environment maps is computationally expensive. Pre-computation adds the limitation that the receiver's geometry must be known in advance, since irradiance computation takes the receiver's normal direction into account. We propose a method that, using a new notion we introduce, the Fullsphere Irradiance, allows us to accumulate the contribution of all light sources in the scene at a potential receiver without knowing the receiver's geometry. This expensive computation is done in a pre-processing step; the pre-computed value is then used at run time to compute the irradiance arriving at any receiver with a known normal direction. We show how, using this technique, we compute soft shadows and self-shadows in real time from all-frequency environments with only modest memory requirements. A GPU implementation of the method yields high frame rates even for complex scenes with dozens of dynamic occluders and receivers.
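
To make the decoupling concrete, here is a hedged analogue (the actual Fullsphere Irradiance definition is the paper's; this sketch only mirrors the idea with directional moments): radiance moments are accumulated over the full sphere with no receiver in sight, and irradiance for any normal is evaluated afterwards.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo directions uniformly distributed over the full sphere.
v = rng.normal(size=(200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

def radiance(d):
    """Toy low-frequency sky: brighter toward +Z, never negative."""
    return 1.0 + d[:, 2]

L = radiance(v)
dw = 4 * np.pi / len(v)                 # uniform solid-angle weight
M0 = L.sum() * dw                       # scalar moment of the light
M1 = (v * L[:, None]).sum(axis=0) * dw  # vector moment of the light

def irradiance(n):
    """Order-1 evaluation of E(n); exact for constant + linear light."""
    return max(0.0, 0.25 * M0 + 0.5 * float(M1 @ n))

for n in (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])):
    ref = (L * np.maximum(v @ n, 0.0)).sum() * dw  # brute-force reference
    print(f"n={n}: moment-based={irradiance(n):.3f}  reference={ref:.3f}")
```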

6.
We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach to texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphics hardware and provides a convenient setup for capturing a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture, and show that it is resilient to misalignment caused by imperfect estimation of the warping fields and inaccurate camera parameters.
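
A toy version of the per-texel fusion step (not the paper's coordinate optimizer, which additionally aligns observations multi-scale before blending): observations of one texel are merged with view-dependent confidence weights.

```python
import numpy as np

# Hedged toy: fuse noisy per-frame color observations of one texel,
# weighting frontal views more than grazing ones.

rng = np.random.default_rng(3)
true_color = np.array([0.8, 0.4, 0.2])

frames = 30
noise = rng.normal(0, 0.05, size=(frames, 3))          # sensor noise
view_cos = rng.uniform(0.1, 1.0, size=frames)          # cos(view angle)
obs = true_color + noise / view_cos[:, None]           # grazing = noisier

w = view_cos ** 2                                      # confidence weight
atlas = (w[:, None] * obs).sum(axis=0) / w.sum()
naive = obs.mean(axis=0)

print("weighted:", np.round(atlas, 3), " naive:", np.round(naive, 3),
      " truth:", true_color)
```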

7.
We propose to enhance the capabilities of the human visual system by performing optical image processing directly on an observed scene. Unlike previous work, which additively superimposes imagery on a scene or completely replaces scene imagery with a manipulated version, we perform all manipulation through a light modulation display that spatially filters the incoming light. We demonstrate a number of perceptually motivated algorithms, including contrast enhancement and reduction, object highlighting for preattentive emphasis, colour saturation, de-saturation and de-metamerization, as well as visual enhancement for the colour blind. A camera observing the scene guides the algorithms for on-the-fly processing, enabling dynamic application scenarios such as monocular scopes, eyeglasses and windshields.
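
Since a modulator can only attenuate light, contrast enhancement reduces to computing a per-pixel transmittance; a minimal, runnable sketch under invented constants (not the authors' calibration pipeline):

```python
import numpy as np

# Hedged toy: desired image = contrast-boosted observation; the
# transmittance t = desired / incoming is rescaled into [0, 1] because
# the modulator cannot amplify.

rng = np.random.default_rng(4)
incoming = rng.uniform(0.2, 0.9, size=(4, 4))      # observed scene radiance

gain = 1.8                                          # contrast boost factor
desired = np.clip(incoming.mean() + gain * (incoming - incoming.mean()),
                  1e-3, None)

t = desired / incoming
t /= t.max()                                        # modulator can't amplify
t = np.clip(t, 0.0, 1.0)

out = incoming * t
print("contrast in :", round(incoming.std() / incoming.mean(), 3))
print("contrast out:", round(out.std() / out.mean(), 3))
```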

8.
Physically based rendering is a well-understood technique for producing realistic-looking images. However, different algorithms exist for efficiency reasons, and they work well in certain cases but fail or produce rendering artefacts in others. Few tools allow a user to gain insight into these algorithmic processes. In this work, we present such a tool, which combines techniques from information visualization and visual analytics with physically based rendering. It consists of an interactive parallel coordinates plot, with a built-in sampling-based data reduction technique, to visualize the attributes associated with each light sample. Two-dimensional (2D) and three-dimensional (3D) heat maps depict any desired property of the rendering process. An interactively rendered 3D view of the scene displays animated light paths based on the user's selection, giving further insight into the rendering process. The provided interactivity enables the user to guide the rendering process toward greater efficiency. To show its usefulness, we present several applications based on our tool, including differential light transport visualization to optimize light setup in a scene, finding the causes of and resolving rendering artefacts such as fireflies, and a path-length contribution histogram to evaluate the efficiency of different Monte Carlo estimators.
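
As an illustration of the kind of view involved, the sketch below plots invented per-sample attributes in a parallel coordinates plot with pandas (attribute names and the firefly threshold are assumptions, not the tool's design):

```python
import matplotlib
matplotlib.use("Agg")                      # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import parallel_coordinates

# Hedged toy: light samples with a few attributes, coloured by whether a
# sample is an outlier ("firefly").

rng = np.random.default_rng(8)
n = 300
df = pd.DataFrame({
    "path_length": rng.integers(2, 8, n),
    "radiance":    rng.exponential(0.3, n),
    "bsdf_pdf":    rng.uniform(0.01, 1.0, n),
})
df["class"] = np.where(df["radiance"] > 1.5, "firefly", "normal")

parallel_coordinates(df, "class", color=("#888888", "#cc3333"), alpha=0.4)
plt.savefig("light_samples.png")
print(df["class"].value_counts().to_dict())
```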

9.
Electroencephalography (EEG) coherence networks represent functional brain connectivity and are constructed by calculating the coherence between pairs of electrode signals as a function of frequency. Visualization of such networks can provide insight into unexpected patterns of cognitive processing and help neuroscientists to understand brain mechanisms. However, visualizing dynamic EEG coherence networks is a challenge for the analysis of brain connectivity, especially when the spatial structure of the network needs to be taken into account. In this paper, we present the design and implementation of a visualization framework for such dynamic networks. First, requirements for supporting typical tasks in the context of dynamic functional connectivity network analysis were collected from neuroscience researchers. In our design, we consider groups of network nodes and their corresponding spatial locations for visualizing the evolution of the dynamic coherence network. We introduce an augmented timeline-based representation to provide an overview of the evolution of functional units (FUs) and their spatial location over time. This representation helps the viewer to identify relations between functional connectivity and brain regions, as well as to identify persistent or transient functional connectivity patterns across the whole time window. In addition, we introduce the time-annotated FU map representation to facilitate comparison of the behaviour of nodes between consecutive FU maps. A colour coding is designed to help distinguish distinct dynamic FUs. Our implementation also supports interactive exploration. The usefulness of our visualization design was evaluated in an informal user study. The feedback we received shows that our design supports exploratory analysis tasks well. The method can serve as a first step before a complete analysis of dynamic EEG coherence networks.
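
The network construction itself is standard signal processing; a runnable sketch (electrode count, frequency band and threshold are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.signal import coherence

# Hedged toy: magnitude-squared coherence between every electrode pair,
# averaged over a band around 10 Hz, then thresholded into a network.

rng = np.random.default_rng(5)
fs, seconds, n_elec = 256, 8, 6
t = np.arange(fs * seconds) / fs

common = np.sin(2 * np.pi * 10 * t)                    # shared 10 Hz rhythm
signals = [common * (i < 3) + rng.normal(size=t.size)  # electrodes 0-2 share it
           for i in range(n_elec)]

adj = np.zeros((n_elec, n_elec))
for i in range(n_elec):
    for j in range(i + 1, n_elec):
        f, cxy = coherence(signals[i], signals[j], fs=fs, nperseg=256)
        band = (f >= 9) & (f <= 11)
        adj[i, j] = adj[j, i] = cxy[band].mean()

print(np.round(adj, 2))
print("edges:", np.argwhere(np.triu(adj > 0.3)))       # thresholded network
```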

10.
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. For each grid cell, the VSF stores the score of the corresponding view, which measures how much that view reduces the uncertainty (entropy) of both geometric reconstruction and semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traversal path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
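
Stripped of reconstruction and labeling, NBV selection reduces to maximizing a score field; a toy sketch (the real method scores entropy reduction of both geometry and semantics, and optimizes full paths between NBVs):

```python
import numpy as np

# Hedged toy: score each candidate view by its expected information
# gain, discounted by travel cost from the current pose, and pick the
# next best view (NBV). All numbers are invented.

rng = np.random.default_rng(6)
xs, ys = np.meshgrid(np.arange(10), np.arange(10))   # 2D view grid
entropy_gain = rng.uniform(0, 1, size=xs.shape)      # per-view info gain
current = np.array([0, 0])

travel = np.hypot(xs - current[0], ys - current[1])
score = entropy_gain - 0.05 * travel                 # gain minus path cost
nbv = np.unravel_index(np.argmax(score), score.shape)
print("next best view at grid cell", nbv, "score", round(float(score[nbv]), 3))
```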

11.
This paper proposes a visual representation named the scene tunnel for capturing urban scenes along routes and visualizing them on the Internet. We scan scenes with multiple cameras or a fish-eye camera on a moving vehicle, which generates a real scene archive along streets that is more complete than previously proposed route panoramas. Using a translating spherical eye, properly set planes of scanning, and a unique parallel-central projection, we explore the image acquisition of the scene tunnel from camera selection and alignment, slit calculation, and scene scanning to image integration. Scene tunnels cover high buildings, the ground, and various viewing directions, and have uniform resolution along the street. The sequentially organized scene tunnel facilitates texture mapping onto urban models. We analyze the shape characteristics in scene tunnels for designing visualization algorithms. Combined with a global panorama and forward image caps, the capped scene tunnel can provide continuous views directly for virtual or real navigation in a city. We render the scene tunnel dynamically by view warping, fast transmission, and flexible interaction. The compact and continuous scene tunnel facilitates model construction, data streaming, and seamless route traversal on the Internet and mobile devices.
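
The acquisition principle is easy to demonstrate: one pixel column per frame, stacked over time. A runnable toy (synthetic street and invented vehicle speed; not the paper's calibrated projection):

```python
import numpy as np

# Hedged toy: slit scanning. As the vehicle translates, one pixel
# column (the "slit") is copied from each frame; stacking the slits
# yields a continuous image of the street.

frames, h, w = 200, 64, 128
slit_x = w // 2                                   # plane of scanning

def frame_at(step):
    """Synthetic street: vertical stripes drifting past the camera."""
    x = (np.arange(w) + 3 * step) % 40            # vehicle moves 3 px/frame
    return np.tile((x < 20).astype(float), (h, 1))

tunnel = np.stack([frame_at(s)[:, slit_x] for s in range(frames)], axis=1)
print("scene-tunnel image shape:", tunnel.shape)  # (64, 200)
```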

12.
We present an image-based rendering system for viewpoint navigation through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, for example outdoors, without elaborate recording setup procedures, also allowing for hand-held recordings. Instead of scene depth estimation, layer segmentation or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion or freeze-and-rotate effects can all be created in the same way. Acquisition simplification, integration of moving cameras, generalization to difficult scenes and space-time symmetric interpolation amount to a widely applicable virtual video camera system.
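
As a stand-in for the paper's dense correspondences, the toy below uses Farneback optical flow and warps halfway along it to fabricate an intermediate frame; it illustrates the "interpolate, don't reconstruct" idea only, and is not the authors' interpolation scheme.

```python
import numpy as np
import cv2

# Hedged toy: two frames of a moving disc; estimate dense flow, then
# backward-warp to t = 0.5.

a = np.zeros((64, 64), np.uint8); cv2.circle(a, (20, 32), 8, 255, -1)
b = np.zeros((64, 64), np.uint8); cv2.circle(b, (40, 32), 8, 255, -1)

flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

ys, xs = np.mgrid[0:64, 0:64].astype(np.float32)
# Approximate: sample frame a half a flow vector back, reading the flow
# at the target pixel (a common shortcut in flow-based interpolation).
mid = cv2.remap(a, xs - 0.5 * flow[..., 0], ys - 0.5 * flow[..., 1],
                cv2.INTER_LINEAR)

px = np.nonzero(mid > 128)[1]
print("mid-frame disc centre ~x =", int(px.mean()) if px.size else "n/a")
```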

13.
Today's large data centres are the computational hubs of the next generation of IT services. With the advent of dynamic smart cooling and rack-level sensing, the need for visual data exploration is growing. If administrators can follow rack-level thermal state changes and catch problems in real time, energy consumption can be greatly reduced. In this paper, we apply a cell-based spatio-temporal overview with high-resolution time series to simultaneously analyze complex thermal state changes over time across hundreds of racks. We employ cell-based visualization techniques for troubleshooting and abnormal-state detection. These techniques are based on the detection of sensor temperature relations and events to help identify the root causes of problems. To optimize data-centre cooling performance, we derive new non-overlapped scatter plots to visualize the correlations between temperatures and chiller utilization. All these techniques have been used successfully to monitor various time-critical thermal states in real-world, large-scale production data centres and to derive cooling policies. We are beginning to embed these visualization techniques into a handheld device to add mobile monitoring capability.
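
The event-detection core behind such a cell-based view can be sketched in a few lines (baseline window, threshold and failure scenario are invented, not the paper's system):

```python
import numpy as np

# Hedged toy: flag rack/time cells whose temperature deviates more than
# 3 sigma from that rack's own baseline period.

rng = np.random.default_rng(9)
racks, hours = 100, 48
temps = rng.normal(24.0, 0.5, size=(racks, hours))
temps[17, 30:] += 4.0                          # a rack starts running hot

base = temps[:, :24]                           # first day as baseline
mu = base.mean(axis=1, keepdims=True)
sd = base.std(axis=1, keepdims=True)
alerts = np.argwhere(temps > mu + 3 * sd)

print("alert cells (rack, hour):", alerts[:3].tolist(),
      "... total", len(alerts))
```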

14.
Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This generates a growing demand for power-aware graphics applications that can extend battery life while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method requires neither precomputation over the whole camera-view space nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model and our runtime quality-error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime while remaining transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer and a mobile device. In both cases, we produce results close to the maximum quality while achieving significant power savings.
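
At its core, the runtime search is a constrained selection; a toy sketch with made-up power and error predictions (the real system predicts these per frame with its power model and error estimator):

```python
# Hedged toy: pick the configuration with the best predicted quality
# whose predicted power draw fits the budget. All numbers are invented.

configs = [
    # (name, predicted watts, predicted quality error)
    ("res=0.50 shadows=off", 2.1, 0.080),
    ("res=0.75 shadows=low", 3.4, 0.035),
    ("res=1.00 shadows=low", 4.8, 0.020),
    ("res=1.00 shadows=high", 6.9, 0.008),
]

def pick(budget_watts):
    ok = [c for c in configs if c[1] <= budget_watts]
    return min(ok, key=lambda c: c[2]) if ok else configs[0]

for budget in (2.5, 5.0, 10.0):
    name, watts, err = pick(budget)
    print(f"budget {budget:4.1f} W -> {name}  ({watts} W, err {err})")
```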

15.
16.
In previous work, we proposed a technique for preserving the privacy of quasi-identifiers in sensitive data when visualized using parallel coordinates. This paper builds on that work by introducing a number of metrics that can be used to assess both the level of privacy and the amount of utility that can be gained from the resulting visualizations. We also generalize our approach beyond parallel coordinates to scatter plots and other visualization techniques. Privacy preservation generally entails a trade-off between privacy and utility: the more the data are protected, the less useful the visualization. Using a visually oriented approach, we can provide greater utility than directly applying the data anonymization techniques used in data mining. To demonstrate this, we use the visual uncertainty framework to systematically define metrics based on cluster artifacts and information-theoretic principles. In a case study, we demonstrate the effectiveness of our technique compared to standard data-based clustering in the context of privacy-preserving visualization.
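
The privacy/utility trade-off such metrics quantify can be demonstrated on a single quasi-identifier; a toy sketch (k, the grouping scheme and the variance-based utility proxy are assumptions, not the paper's metrics):

```python
import numpy as np

# Hedged toy: bin a quasi-identifier so every bin holds exactly k
# records, then measure utility as the variance retained after
# replacing values by their bin means.

rng = np.random.default_rng(10)
ages = np.sort(rng.integers(18, 90, 200)).astype(float)
k = 10

anonymized = ages.copy()
for start in range(0, len(ages), k):
    chunk = slice(start, start + k)
    anonymized[chunk] = ages[chunk].mean()     # each group is k-anonymous

utility = anonymized.var() / ages.var()        # 1.0 = nothing lost
print(f"k={k}: retained variance (utility proxy) = {utility:.3f}")
```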

17.
A camera's shutter controls the incoming light reaching the camera sensor. Different shutters lead to wildly different results and are often used as an artistic tool in movies, e.g., to indirectly control the amount of motion blur. However, a physical camera is limited to a single shutter setting at any given moment. ShutterApp enables users to define spatio-temporally varying virtual shutters that go beyond the options available in real-world camera systems. A user provides a sparse set of annotations that define shutter functions at selected locations in key frames. From this input, our solution interpolates a shutter function for every pixel of the video sequence, which is then employed to derive the output video. Our solution performs in real time on commodity hardware, so users can explore different options interactively, reaching a new level of expressiveness without having to rely on specialized hardware or laborious editing.
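
A per-pixel shutter is just a temporal weighting of the frames inside one output exposure; the runnable toy below interpolates shutter length across the image between two implicit annotations (a box shutter is assumed for simplicity; the actual system supports general shutter functions):

```python
import numpy as np

# Hedged toy: long box shutter on the left, short on the right, linearly
# interpolated per column -- spatially varying motion blur from one clip.

frames, h, w = 16, 8, 32
video = np.zeros((frames, h, w))
for f in range(frames):
    video[f, :, (2 * f) % w] = 1.0          # a bright bar sweeping right

open_len = np.round(np.linspace(frames, 2, w)).astype(int)  # per-column length

out = np.zeros((h, w))
for x in range(w):
    weights = np.zeros(frames)
    weights[:open_len[x]] = 1.0 / open_len[x]   # box shutter, varying length
    out[:, x] = np.tensordot(weights, video[:, :, x], axes=1)

print("left half (long shutter):", (out[:, :16] > 0).sum(),
      "px lit;  right half (short):", (out[:, 16:] > 0).sum(), "px lit")
```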

18.
We present a real-time algorithm for rendering translucent objects of arbitrary shape. We approximate the scattering of light inside the objects using the diffusion equation, which we solve on the fly on the GPU. Our algorithm is general enough to handle arbitrary geometry, heterogeneous materials, deformable objects and modifications of the lighting, all in real time. In a pre-processing step, we discretize the object into a regular 4-connected structure (QuadGraph). Due to its regular connectivity, this structure is easily packed into a texture and stored on the GPU. At runtime, we use the QuadGraph stored on the GPU to solve the diffusion equation in real time, taking into account the varying input conditions: incoming light, object material and geometry. We handle deformable objects, provided the deformation does not change the topological structure of the objects.
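
A regular grid can stand in for the QuadGraph to show the solver's shape: one Jacobi relaxation per pass, exactly the kind of stencil update that maps well to the GPU. A runnable toy with invented absorption and boundary (periodic borders for brevity):

```python
import numpy as np

# Hedged toy: steady-state diffusion with absorption, relaxed by Jacobi
# iterations. A bright "incoming light" patch diffuses into the volume.

n = 32
u = np.zeros((n, n))
src = np.zeros((n, n))
src[0, 12:20] = 1.0                        # light entering the surface
absorb = 0.05                              # heterogeneous media would vary this

for _ in range(500):                       # one GPU pass per iteration
    nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
          np.roll(u, 1, 1) + np.roll(u, -1, 1))
    u = (nb / 4 + src) / (1 + absorb)      # Jacobi update with absorption

print("exitance along the opposite face:", np.round(u[-1, 12:20], 4))
```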

19.
We present an optimized pruning algorithm that allows considerable geometry reduction in large botanical scenes while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. For this we introduce the use of Precision and Recall (PR) as a measure of rendering quality and show how PR scores can be used to predict better scaling values. We conducted a user study letting subjects adjust the scaling value, which shows that the predicted scaling matches the preferred one. Finally, we extend the originally purely stochastic geometry prioritization for pruning to account for view-optimized geometry selection, which allows global scene information, such as occlusion, to be taken into consideration. We demonstrate our method on the real-time rendering of scenes with thousands of complex tree models.
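
Precision and Recall translate directly to coverage masks; a toy sketch (the paper's exact PR definition over renderings may differ):

```python
import numpy as np

# Hedged toy: score a pruned render against the full-geometry reference.
# Precision: covered pixels that are correct; Recall: reference pixels
# still covered. Low recall at high precision suggests up-scaling the
# surviving leaves, which is what the optimized scaling compensates for.

rng = np.random.default_rng(7)
reference = rng.random((64, 64)) < 0.30          # foliage coverage mask
keep = rng.random((64, 64)) < 0.5                # stochastic pruning
pruned = reference & keep                        # half the leaves survive

tp = (pruned & reference).sum()
precision = tp / max(pruned.sum(), 1)
recall = tp / max(reference.sum(), 1)
print(f"precision={precision:.2f} recall={recall:.2f}")
```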

20.
We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate intensity per pixel, I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t), the full time profile of irradiance incident at each sensor pixel, and uses an ultra-short pulse laser for illumination. Emerging technologies are supporting cameras with a temporal profile per pixel at picosecond resolution, allowing us to capture ultra-high-speed time-images. We experimentally corroborated our theory with free-space hardware experiments using a femtosecond laser and a picosecond-accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and the illumination source, will create a range of new computer vision opportunities.
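
The basic per-pixel reasoning is easy to reproduce: peaks in the time profile map to path lengths via d = c·t/2. A runnable toy (pulse times and amplitudes are invented; this is not the authors' reconstruction):

```python
import numpy as np

# Hedged toy: a transient pixel is a time profile I(t). The first peak
# is the direct bounce, giving depth d = c*t/2; later peaks are
# multi-path returns carrying information about hidden surfaces.

C = 3e8                                      # speed of light, m/s
t = np.arange(0, 60e-9, 1e-12)               # 60 ns at 1 ps resolution

def pulse(t0, amp):
    return amp * np.exp(-((t - t0) / 20e-12) ** 2)

profile = pulse(20e-9, 1.0) + pulse(38e-9, 0.05)   # direct + hidden bounce

direct = t[np.argmax(profile)]
print(f"direct-path depth: {C * direct / 2:.2f} m")          # ~3 m
later = profile.copy(); later[t < 25e-9] = 0
print(f"multi-path return at {t[np.argmax(later)]*1e9:.0f} ns "
      f"-> extra path {C * (t[np.argmax(later)] - direct):.2f} m")
```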
