Similar Documents
20 similar documents found.
1.
The analysis of protein-ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space-hungry, often encode only a small number of different properties. In this paper we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques for analyzing large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein-ligand interactions in molecular simulation trajectories is greatly facilitated.

2.
We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one-degree-of-freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed parts or with each other. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.

3.
We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance, by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible "rapid appearance modeling".

4.
An appearance model for materials coated with massive collections of special-effect pigments has to take into account both high-frequency spatial details (e.g., glints) and wave-optical effects (e.g., iridescence) due to thin-film interference. However, either phenomenon is challenging to characterize and simulate in a physically accurate way. Capturing these fascinating effects in a unified framework is even harder, as the normal distribution function and the reflectance term are highly correlated and cannot be treated separately. In this paper, we propose a multi-scale BRDF model for reproducing the main visual effects generated by the discrete assembly of special-effect pigments, enabling a smooth transition from fine-scale surface details to large-scale iridescent patterns. We demonstrate that the wavelength-dependent reflectance inside a pixel's footprint follows a Gaussian distribution according to the central limit theorem, and is closely related to the distribution of the thin film's thickness. We efficiently determine the mean and the variance of this Gaussian distribution for each pixel; their closed-form expressions can be derived by assuming that the thin film's thickness is uniformly distributed. To validate its effectiveness, the proposed model is compared against previous methods and against photographs of actual materials. Furthermore, since our method does not require any scene-dependent precomputation, the distribution of thickness is allowed to be spatially varying.
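A numerical illustration of the key claim, under an assumed two-beam thin-film interference model with hypothetical constants (r1, r2, film index, thickness range): averaged over a uniform thickness distribution, the Monte Carlo mean reflectance matches a closed-form expression, and the per-pixel aggregate over many flakes is Gaussian by the central limit theorem.

```python
import numpy as np

# Two-beam thin-film interference (simplified): for film index n_film and
# thickness d, the phase difference at wavelength lam is 4*pi*n_film*d/lam,
# giving R = r1^2 + r2^2 + 2*r1*r2*cos(phase). All constants are assumed.
r1, r2, n_film = 0.2, 0.3, 1.5
lam = 550e-9                       # wavelength (green), meters
d0, d1 = 300e-9, 600e-9            # uniform thickness range (assumed)

# Monte Carlo: reflectance of many pigment flakes inside one pixel.
rng = np.random.default_rng(0)
d = rng.uniform(d0, d1, size=200_000)
R = r1**2 + r2**2 + 2 * r1 * r2 * np.cos(4 * np.pi * n_film * d / lam)

# Closed-form mean under uniform thickness: E[cos(a*d)] for d ~ U[d0, d1].
a = 4 * np.pi * n_film / lam
mean_cos = (np.sin(a * d1) - np.sin(a * d0)) / (a * (d1 - d0))
mean_closed = r1**2 + r2**2 + 2 * r1 * r2 * mean_cos

print(R.mean(), mean_closed)       # the two means agree
# By the central limit theorem, the pixel reflectance (an average over many
# flakes) is Gaussian with this mean and variance R.var() / num_flakes.
```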

5.
Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method requires neither a precomputation over the whole camera-view space nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: a novel power prediction model, and a runtime quality-error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, transparently to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer and a mobile device. In both cases, we produce results close to the maximum quality while achieving significant power savings.
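A minimal sketch of the budget-constrained selection loop: among configurations whose predicted power fits the budget, pick the one with the lowest estimated quality error. Both `predicted_power` and `estimated_error` are hypothetical stand-ins for the paper's models, as are the configuration knobs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RenderConfig:
    resolution_scale: float   # e.g. 0.5 .. 1.0
    shading_rate: int         # hypothetical quality knob

def predicted_power(cfg: RenderConfig) -> float:
    """Hypothetical stand-in for the power prediction model."""
    return 2.0 * cfg.resolution_scale**2 + 0.5 * cfg.shading_rate

def estimated_error(cfg: RenderConfig) -> float:
    """Hypothetical stand-in for the runtime quality-error estimate."""
    return (1.0 - cfg.resolution_scale) + 1.0 / cfg.shading_rate

def best_config(configs, power_budget):
    # Among configurations within the power budget, pick the one with
    # the lowest estimated quality error (i.e. the best quality).
    feasible = [c for c in configs if predicted_power(c) <= power_budget]
    return min(feasible, key=estimated_error) if feasible else None

configs = [RenderConfig(s, r) for s in (0.5, 0.75, 1.0) for r in (1, 2, 4)]
print(best_config(configs, power_budget=3.0))
```

Because both models are evaluated at runtime, the search can be repeated every frame, which is what makes dynamic scenes tractable without precomputation.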

6.
Procedural textile models are compact, easy to edit, and can achieve state-of-the-art realism with fiber-level details. However, these complex models generally need to be fully instantiated (i.e., realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization-minimizing technique that enables physically based rendering of procedural textiles without the need for full model realizations. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber-level procedural yarn models in their exact form, with no approximation imposed. In practice, our method can render very large models that are practically unrenderable using existing methods, while using considerably less memory (60–200× less) and achieving good performance.
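The consistency requirement (repeated queries of the same yarn segment must see exactly the same fibers, with nothing stored) is commonly met by seeding a pseudo-random generator with a hash of the segment's index. A sketch under that assumption, with hypothetical fiber parameters; the paper's actual data structures and search algorithms are more elaborate.

```python
import numpy as np

def fibers_for_segment(yarn_id: int, segment_id: int, num_fibers: int = 16):
    """Generate the fibers of one yarn segment on the fly.

    Seeding the generator with a hash of (yarn_id, segment_id) makes the
    lookup consistent: every query of the same segment reproduces exactly
    the same fibers, so nothing has to be realized and stored in memory.
    """
    seed = hash((yarn_id, segment_id)) & 0xFFFFFFFF
    rng = np.random.default_rng(seed)
    # Hypothetical fiber parameters: radial offset and phase within the ply.
    radius = rng.uniform(0.0, 1.0, num_fibers)
    phase = rng.uniform(0.0, 2 * np.pi, num_fibers)
    return radius, phase

# A renderer can call this from any thread, in any order; repeated queries
# of segment (3, 42) always see identical fibers.
print(fibers_for_segment(3, 42)[0][:4])
```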

7.
We present a versatile technique to convert textures with tristimulus colors into the spectral domain, allowing such content to be used in modern rendering systems. Our method is based on the observation that suitable reflectance spectra can be represented using a low-dimensional parametric model that is intrinsically smooth and energy-conserving, which leads to significant simplifications compared to prior work. The resulting spectral textures are compact and efficient: storage requirements are identical to standard RGB textures, and as few as six floating point instructions are required to evaluate them at any wavelength. Our model is the first spectral upsampling method to achieve zero error on the full sRGB gamut. The technique also supports large-gamut color spaces, and can be vectorized effectively for use in rendering systems that handle many wavelengths at once.
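The description is consistent with a sigmoid-of-polynomial reflectance model: three coefficients per texel (the same storage as an RGB texture), intrinsically smooth, bounded in [0, 1] (energy-conserving), and costing a handful of floating point operations per wavelength. A sketch under that reading, with illustrative (not fitted) coefficients:

```python
import numpy as np

def reflectance(lam, c0, c1, c2):
    """Smooth, energy-conserving spectrum from three coefficients.

    A sigmoid of a quadratic polynomial in wavelength: the output always
    lies in (0, 1), so the spectrum never reflects more light than it
    receives, and evaluation needs only a few floating point operations
    (two fused multiply-adds for the polynomial, a square, an add, an
    inverse square root, and a final multiply-add).
    """
    x = (c0 * lam + c1) * lam + c2                    # quadratic polynomial
    return 0.5 + x / (2.0 * np.sqrt(x * x + 1.0))     # maps R -> (0, 1)

lam = np.linspace(380, 730, 8)                        # wavelengths in nm
# Illustrative coefficients, not fitted to any particular RGB value:
print(reflectance(lam, c0=-1e-4, c1=0.1, c2=-25.0))
```

Since the evaluation is pure elementwise arithmetic, it vectorizes trivially over an array of wavelengths, matching the abstract's claim about many-wavelength renderers.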

8.
Iridescence is a natural phenomenon that is perceived as gradual color changes, depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin films of varying thickness on top of an arbitrary microfacet model with a conducting or dielectric base material, but evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, causing an unnatural appearance that is not observed in ground-truth data. We address this problem by taking the distribution of light directions, given by the environment map and the surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to that of previous methods, with only a small negative impact on performance.
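A minimal numpy sketch of the prefiltering step described above, assuming an equirectangular environment map: the radiance-weighted first moment (mean light direction) and second moment (directional spread) are computed with solid-angle weighting. How the filtered reflectance term is evaluated from these moments is the paper's contribution and is omitted here.

```python
import numpy as np

def light_direction_moments(env):
    """First and second moments of the light direction of an env map.

    env: (H, W) array of radiance in an assumed equirectangular
    parameterization. Returns the radiance-weighted mean direction
    (first moment) and mean outer product (second moment), with the
    sin(theta) solid-angle weight of the parameterization.
    """
    H, W = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi          # polar angle
    phi = (np.arange(W) + 0.5) / W * 2 * np.pi        # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)             # (H, W, 3)
    w = env * np.sin(t)                               # solid-angle weight
    w_sum = w.sum()
    mu = (w[..., None] * dirs).reshape(-1, 3).sum(0) / w_sum       # 1st moment
    outer = dirs[..., :, None] * dirs[..., None, :]                # (H, W, 3, 3)
    m2 = (w[..., None, None] * outer).reshape(-1, 3, 3).sum(0) / w_sum
    return mu, m2    # m2's spread around mu*mu^T encodes directionality

env = np.ones((64, 128))     # constant white environment (toy input)
mu, m2 = light_direction_moments(env)
print(mu, np.trace(m2))      # mean ~zero vector; trace(m2) == 1 (unit dirs)
```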

9.
In this work, we introduce multi-column graph convolutional networks (MGCNs), a deep generative model for 3D mesh surfaces that effectively learns a non-linear facial representation. We perform spectral decomposition of meshes and apply convolutions directly in the frequency domain. Our network architecture involves multiple columns of graph convolutional networks (GCNs), namely a large GCN (L-GCN), a medium GCN (M-GCN) and a small GCN (S-GCN), with different filter sizes to extract features at different scales. L-GCN is best suited to extracting large-scale features, whereas S-GCN is effective for extracting subtle, fine-grained features, and M-GCN captures information in between. Therefore, to obtain a high-quality representation, we propose a selective fusion method that adaptively integrates these three kinds of information. Spatially non-local relationships are also exploited through a self-attention mechanism to further improve the representation ability in the latent vector space. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction. Moreover, with the help of variational inference, our model has excellent generative ability.
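A minimal numpy sketch of one plausible reading of the selective fusion step: a learned map scores each column per feature dimension, and a softmax over the columns turns the scores into blending weights, so the fused vector can draw large-scale structure from one column and fine detail from another. The gating map `W` and the placeholder column features are hypothetical; the paper's exact fusion network may differ.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_fusion(f_large, f_medium, f_small, W):
    """Adaptively fuse features from the three GCN columns.

    A learned map W (hypothetical stand-in for a trained gating network)
    produces one score per column per feature dimension; a softmax over
    the column axis converts scores to blending weights.
    """
    F = np.stack([f_large, f_medium, f_small])    # (3, D)
    scores = (W @ F.reshape(-1)).reshape(3, -1)   # (3, D) column scores
    weights = softmax(scores, axis=0)             # blend weights sum to 1
    return (weights * F).sum(axis=0)              # (D,) fused feature

D = 8
rng = np.random.default_rng(1)
W = rng.normal(size=(3 * D, 3 * D))               # untrained toy gating map
print(selective_fusion(rng.normal(size=D), rng.normal(size=D),
                       rng.normal(size=D), W))
```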

10.
We present a novel method to compute bijective PolyCube-maps with low isometric distortion. Given a surface and a pre-axis-aligned shape of it that is not an exact PolyCube shape, the algorithm proceeds in two steps: (i) construct a PolyCube shape to approximate the pre-axis-aligned shape; and (ii) generate a bijective, low-isometric-distortion mapping between the constructed PolyCube shape and the input surface. The PolyCube construction is formulated as a constrained optimization problem, where the objective is the number of corners in the constructed PolyCube, and the constraint is to bound the approximation error between the constructed PolyCube and the input pre-axis-aligned shape while ensuring topological validity. A novel erasing-and-filling solver is proposed to solve this challenging problem. Central to the algorithm for computing bijective PolyCube-maps is a quad mesh optimization process that projects the constructed PolyCube onto the input surface with high-quality quads. We demonstrate the efficacy of our algorithm on a data set containing 300 closed meshes. Compared to state-of-the-art methods, our method achieves higher practical robustness and lower mapping distortion.

11.
The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep-neural-network-based approach that automatically recognizes key elements of a visualization, including the visualization type, graphical elements, labels, legends, and, most importantly, the original data conveyed in the visualization. We leverage the extracted information to provide visually impaired users with an accessible reading of the chart. Based on interviews with visually impaired users, we built a Google Chrome extension designed to work with screen reader software to automatically decode charts on a webpage using our pipeline. We compared the performance of the back-end algorithm with existing methods and evaluated the utility using qualitative feedback from visually impaired users.

12.
We introduce a novel, flexible approach to the spatiotemporal exploration of rectilinear scalar volumes. Our out-of-core representation, based on per-frame levels of hierarchically tiled non-redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low-bitrate codec, which stores a variable-rate approximation based on sparse coding with learned dictionaries into fixed-size pages, is exploited to meet stringent bandwidth constraints during time-critical operations, while a near-lossless representation is employed to support high-quality static-frame rendering. A flexible high-speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object-space and image-space operations to seamlessly support, on fat and thin clients, different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high-quality snapshots generated from near-lossless data. The quality and performance of our approach are demonstrated on large data sets with thousands of multi-billion-voxel frames.
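As an illustration of variable-rate sparse coding into fixed-size pages, here is a greedy matching-pursuit sketch (a stand-in, not the paper's codec): each block stores at most `max_atoms` index-coefficient pairs, so every encoded block fits a fixed-size page, while easy-to-approximate blocks effectively spend fewer meaningful coefficients.

```python
import numpy as np

def encode_block(block, dictionary, max_atoms):
    """Sparse-code one voxel block with at most `max_atoms` atoms.

    Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution. Capping
    the atom count bounds the per-block storage (fixed-size page).
    """
    residual = block.astype(np.float64).copy()
    indices, coeffs = [], []
    for _ in range(max_atoms):
        scores = dictionary.T @ residual        # correlation with each atom
        k = int(np.abs(scores).argmax())
        c = scores[k]                           # atoms assumed unit-norm
        indices.append(k)
        coeffs.append(c)
        residual -= c * dictionary[:, k]
    return indices, coeffs, np.linalg.norm(residual)

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))                  # toy "learned" dictionary
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
block = rng.normal(size=64)                     # one flattened 4x4x4 block
print(encode_block(block, D, max_atoms=8)[2])   # remaining residual norm
```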

13.
Reproducing the appearance of real-world materials using current printing technology is problematic. The reduced number of available inks defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out-of-gamut material appearance into the printer's gamut while minimizing such distortions as much as possible. We present a novel two-step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness or roughness). In the first step, we work in the low-dimensional intuitive appearance space recently proposed by Serrano et al. [SGM*16], and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From this intermediate representation, we then perform an image-based optimization that includes color information, to bring the BRDF into gamut. We show, both objectively and through a user study, that our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes need to be preserved. Moreover, we show how this approach can also be used for attribute-preserving material editing.

14.
We introduce Flexible Live-Wire, a generalization of the Live-Wire interactive segmentation technique with floating anchors. In our approach, the user input for Live-Wire is no longer limited to pixel-level anchor nodes; more general anchor sets can be used instead. These sets can be of any dimension, size, or connectedness. The generality of the approach allows the design of a number of user interactions while providing the same functionality as traditional Live-Wire. In particular, we experiment with this new flexibility by designing four novel Live-Wire interactions based on specific primitives: paint, pinch, probable, and pick anchors. These interactions are only a subset of the possibilities enabled by our generalization. Moreover, we discuss the computational aspects of this approach and provide practical solutions to alleviate any additional overhead. Finally, we illustrate our approach and the new interactions through several example segmentations.
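Live-Wire computes minimum-cost paths on the pixel graph; generalizing the anchor from one pixel to an arbitrary set amounts to a multi-source Dijkstra search. A sketch on a hypothetical 4-connected per-pixel cost grid:

```python
import heapq
import numpy as np

def live_wire(cost, anchor_set):
    """Shortest-path distances from an arbitrary anchor *set* of pixels.

    Classic Live-Wire runs Dijkstra from one anchor pixel; seeding the
    priority queue with every pixel of the anchor set (multi-source
    Dijkstra) supports anchors of any size or connectedness, e.g. a
    painted stroke. `cost` is a per-pixel traversal cost (4-connected).
    """
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    heap = []
    for (y, x) in anchor_set:                # every anchor is a source
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                         # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist   # backtracking along decreasing dist yields the wire

cost = np.ones((16, 16))
print(live_wire(cost, anchor_set=[(0, 0), (0, 15)])[8, 8])
```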

15.
Rendering materials such as metallic paints, scratched metals and rough plastics requires glint integrators that can capture all micro-specular highlights falling into a pixel footprint, faithfully replicating surface appearance. Specular normal maps can be used to represent a wide range of arbitrary micro-structures. The use of normal maps comes with important drawbacks, though: the appearance is dark overall due to back-facing normals, and importance sampling is suboptimal, especially when the micro-surface is very rough. We propose a new glint integrator relying on a multiple-scattering, patch-based BRDF that addresses these issues. To do so, our method uses a modified version of microfacet-based normal mapping [SHHD17] designed for glint rendering, leveraging symmetric microfacets. To model multiple scattering, we re-introduce the energy lost by a perfectly specular, single-scattering formulation, instead of using expensive random walks. This reflectance model is the basis of our patch-based BRDF, enabling robust sampling and artifact-free rendering with a natural appearance. Additional calculation costs amount to about 40% in the worst cases compared to previous methods [YHMR16, CCM18].
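To illustrate how lost single-scattering energy can be re-introduced without random walks, the sketch below uses a Kulla-Conty-style compensation lobe, named explicitly here as a swapped-in standard technique rather than this paper's exact formulation: the directional albedo E of the single-scattering lobe is estimated, and the missing energy 1 - E is redistributed by an additive reciprocal lobe.

```python
import numpy as np

def directional_albedo(brdf, mu_o, n_samples=4096, seed=0):
    """Monte Carlo estimate of E(mu_o), the total energy reflected by
    the single-scattering lobe for outgoing cosine mu_o."""
    rng = np.random.default_rng(seed)
    # Cosine-weighted hemisphere sampling: pdf = mu_i / pi, so the
    # estimator of integral(f * mu_i) is mean(f * pi).
    mu_i = np.sqrt(rng.uniform(size=n_samples))
    phi = rng.uniform(0.0, 2 * np.pi, n_samples)
    return float(np.mean(brdf(mu_i, mu_o, phi) * np.pi))

def compensated_brdf(brdf, mu_i, mu_o, phi, E_o, E_i, E_avg):
    """Single-scattering lobe plus an additive compensation lobe that
    re-injects the energy 1 - E lost to missing multiple scattering
    (reciprocal Kulla-Conty-style form, used here as an illustration)."""
    f_ms = (1.0 - E_o) * (1.0 - E_i) / (np.pi * (1.0 - E_avg))
    return brdf(mu_i, mu_o, phi) + f_ms

# Toy single-scattering lobe that loses energy (albedo < 1 everywhere):
brdf = lambda mu_i, mu_o, phi: 0.4 * (mu_i * mu_o) / np.pi
E = directional_albedo(brdf, mu_o=0.8)
print(E, compensated_brdf(brdf, 0.5, 0.8, 0.0, E, E, E))
```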

16.
The game and movie industries constantly face the challenge of reproducing materials. This problem is tackled by combining illumination models and various textures (painted or procedural patterns). Generating stochastic wall patterns is crucial for creating a wide range of backgrounds (castles, temples, ruins…). A specific Wang tile set was previously introduced to tackle this problem in an iterative fashion. However, long lines may appear as visual artifacts. We use this tile set in a new on-the-fly procedure to generate stochastic wall patterns. For this purpose, we introduce specific hash functions implementing a constrained Wang tiling. This technique makes possible the generation of boundless textures while giving control over the maximum line length. The algorithm is simple and easy to implement, and the wall structure we get from the tiles allows us to achieve visuals that reproduce all the small details of artist-painted walls.
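The sketch below illustrates hash-based, on-the-fly Wang tiling: each edge color is hashed from the lattice coordinates of the edge itself, so the two cells sharing an edge always agree on its color and the texture is boundless with zero storage. This is a standard trick consistent with the abstract's description; the paper's specific constrained hash functions, including the maximum-line-length control, are not reproduced here.

```python
def h(*args, n=3):
    """Deterministic integer hash of the arguments -> one of n edge colors."""
    x = 0x9E3779B9
    for a in args:
        x ^= (a + 0x9E3779B9 + (x << 6) + (x >> 2)) & 0xFFFFFFFF
    return x % n

def tile_at(i, j):
    """Wang tile of cell (i, j), generated on the fly.

    Each edge color is hashed from the coordinates of the edge itself
    (not of the cell), so neighbors sharing an edge compute the same
    color for it: the tiling is seamless without any stored texture.
    """
    left = h(i, j, 0)        # vertical edge between cells (i-1, j) and (i, j)
    right = h(i + 1, j, 0)
    bottom = h(i, j, 1)      # horizontal edge between cells (i, j-1) and (i, j)
    top = h(i, j + 1, 1)
    return (left, right, bottom, top)

# Neighboring cells agree on their shared edge color by construction:
assert tile_at(4, 7)[1] == tile_at(5, 7)[0]   # right of (4,7) == left of (5,7)
print(tile_at(4, 7))
```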

17.
Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its predictions with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single-image and complex multi-image approaches.
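A sketch of the key property of an order-independent fusing layer, assuming element-wise max pooling (a common choice for such layers, not confirmed by the abstract): the fused feature is invariant to the order and the number of input pictures.

```python
import numpy as np

def fuse(per_image_features):
    """Order-independent fusion of a variable number of image features.

    Element-wise max pooling over the image axis is permutation
    invariant and accepts any number of inputs, so the network can take
    1..N uncalibrated, unordered photographs.
    """
    return np.max(np.stack(per_image_features, axis=0), axis=0)

rng = np.random.default_rng(0)
feats = [rng.normal(size=16) for _ in range(5)]   # 5 encoded photographs

a = fuse(feats)
b = fuse(feats[::-1])                  # same pictures, reversed order
assert np.allclose(a, b)               # the order does not matter
print(a[:4])
```

A mean instead of a max would have the same invariance; the pooling operator is a design choice of the architecture.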

18.
Indirect illumination involving visually rich participating media such as turbulent smoke and explosions contributes significantly to the appearance of other objects in a rendered scene. However, previous real-time techniques have focused only on the appearance of the media directly visible from the viewer. Specifically, appearances seen indirectly, over reflective surfaces, have not attracted much attention. In this paper, we present a real-time rendering technique for such indirect views involving participating media. To achieve real-time performance for computing indirect views, we leverage layered polygonal area lights (LPALs), obtained by slicing the media into multiple flat layers. Using this representation, the radiance entering each surface point from each slice of the volume is evaluated analytically, enabling instant calculation. The analytic solution can be derived for standard bidirectional reflectance distribution functions (BRDFs) based on microfacet theory. Accordingly, our method is robust enough to work on surfaces with arbitrary shapes and roughness values. In addition, we propose a quadrature method for more accurate rendering of scenes with dense volumes, and a transformation of the domain of volumes that simplifies the calculation and implementation of the proposed method. By taking advantage of these computation techniques, the proposed method achieves real-time rendering of indirect illumination from emissive volumes.
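The analytic evaluation of radiance from flat polygonal layers builds on classical closed-form integration of polygonal lights. As a minimal illustration (the classic Lambertian case, not this paper's microfacet derivation), the sketch below computes the exact point-to-polygon form factor with Lambert's edge-sum formula.

```python
import numpy as np

def polygon_form_factor(verts, n):
    """Analytic point-to-polygon form factor for a Lambertian receiver.

    The classic edge-sum formula (Lambert/Baum): project the polygon's
    vertices onto the unit sphere around the shading point, then sum,
    per edge, the subtended angle times the unit normal of the plane
    through the edge and the point. Horizon clipping is omitted for
    brevity. `verts` are light vertices relative to the shading point;
    `n` is the receiver's surface normal.
    """
    v = verts / np.linalg.norm(verts, axis=1, keepdims=True)
    acc = np.zeros(3)
    for i in range(len(v)):
        a, b = v[i], v[(i + 1) % len(v)]
        c = np.cross(a, b)
        angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        acc += angle * c / np.linalg.norm(c)
    return max(np.dot(acc, n), 0.0) / (2.0 * np.pi)

# Unit square light one unit above the shading point, facing it:
quad = np.array([[-0.5, -0.5, 1.0], [0.5, -0.5, 1.0],
                 [0.5, 0.5, 1.0], [-0.5, 0.5, 1.0]])
print(polygon_form_factor(quad, n=np.array([0.0, 0.0, 1.0])))
```

Summing such a closed-form contribution per layer is what makes a sliced-volume representation evaluable without per-light numerical integration.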

19.
In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm that can efficiently estimate geometry for textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric-consistency metrics between a patch in the reference image and its projection onto other images. Photometric consistency works well in textured regions but cannot discriminate between hypotheses in textureless regions, which makes geometry estimation there difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share approximately the same depth and normal values, local consistency guides the depth and normal estimation with geometry from neighboring pixels of similar color. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture, similar to previous work, which also provides coarse estimates at upper levels. We validate the effectiveness of our method on the ETH3D benchmark and the Tanks and Temples benchmark. Results show that our method outperforms the state of the art.
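A simplified sketch of the local-consistency idea under strong assumptions (scalar depth instead of depth-normal hypotheses, a 4-neighborhood, and a hypothetical `photo_cost` callback): each pixel adopts candidates from similar-colored neighbors and scores them by photometric cost plus disagreement with those neighbors, so textureless pixels, whose photometric cost is flat, are pulled toward their neighbors' geometry.

```python
import numpy as np

def propagate_with_local_consistency(depth, color, photo_cost,
                                     tau=0.05, lam=0.5):
    """One PatchMatch-style propagation sweep with local consistency.

    Neighbors whose color differs by less than `tau` are assumed to lie
    on the same surface; their depths become extra candidates, scored by
    photometric cost plus `lam` times the disagreement with the mean of
    those similar-colored neighbors. A stand-in for the paper's scheme.
    """
    H, W = depth.shape
    new_depth = depth.copy()
    for y in range(H):
        for x in range(W):
            similar = [depth[y + dy, x + dx]
                       for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= y + dy < H and 0 <= x + dx < W
                       and abs(color[y + dy, x + dx] - color[y, x]) < tau]
            if not similar:
                continue
            candidates = [depth[y, x]] + similar
            anchor = float(np.mean(similar))   # local-consistency target
            cost = lambda d: photo_cost(y, x, d) + lam * abs(d - anchor)
            new_depth[y, x] = min(candidates, key=cost)
    return new_depth

# Toy: flat textureless wall at depth 2.0 with one noisy estimate.
depth = np.full((5, 5), 2.0)
depth[2, 2] = 5.0
color = np.zeros((5, 5))                       # uniform color (textureless)
photo = lambda y, x, d: 0.0                    # photometric cost is flat here
print(propagate_with_local_consistency(depth, color, photo)[2, 2])  # -> 2.0
```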

20.
Journalists need visual interfaces that cater to the exploratory nature of their investigative activities. In this paper, we report on a four-year design study with data journalists. The main result is netflower, a visual exploration tool that supports journalists in investigating quantitative flows in dynamic network data for story-finding. The visual metaphor is based on Sankey diagrams and has been extended to handle large amounts of input data as well as network change over time. We followed a structured, iterative design process including requirement analysis and multiple design and prototyping iterations in close cooperation with journalists. To validate our concept and prototype, a workshop series and two diary studies were conducted with journalists. Our findings indicate that journalists can pick up the prototype quickly and achieve valuable insights within a few hours. The prototype can be accessed at: http://netflower.fhstp.ac.at/
