1.
We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one‐degree‐of‐freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed and other moving parts. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.
2.
P. Vázquez, P. Hermosilla, V. Guallar, J. Estrada, A. Vinacua. Computer Graphics Forum, 2018, 37(3):391-402
The analysis of protein‐ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space‐hungry, often encode only a small number of different properties. In this paper we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques to analyze large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein‐ligand interactions in molecular simulation trajectories is greatly facilitated.
3.
Procedural textile models are compact, easy to edit, and can achieve state‐of‐the‐art realism with fiber‐level details. However, these complex models generally need to be fully instantiated (i.e., realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization‐minimizing technique that enables physically based rendering of procedural textiles without the need for full model realizations. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber‐level procedural yarn models in their exact form, with no approximation imposed. In practice, our method can render very large models that are practically unrenderable using existing methods, while using considerably less memory (60–200× less) and achieving good performance.
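The on‐the‐fly lookup that avoids full realization can be illustrated with a small sketch (all names and the parametrization below are hypothetical, not the paper's actual data structures): because every fiber of a procedural yarn is fully determined by the model parameters and the fiber's indices, a hash‐seeded generator can re‐derive any fiber's geometry on demand instead of storing realized fiber meshes.

```python
import hashlib
import math

def fiber_params(yarn_id: int, ply_id: int, fiber_id: int) -> dict:
    """Derive deterministic per-fiber parameters by hashing the fiber's
    indices, so the same fiber is regenerated consistently on every query."""
    key = f"{yarn_id}/{ply_id}/{fiber_id}".encode()
    h = int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "little")
    radius = 0.5 + 0.5 * ((h & 0xFFFF) / 0xFFFF)             # in [0.5, 1.0]
    phase = 2.0 * math.pi * (((h >> 16) & 0xFFFF) / 0xFFFF)  # in [0, 2*pi)
    return {"radius": radius, "phase": phase}

def fiber_point(yarn_id, ply_id, fiber_id, t, twist=20.0):
    """Evaluate a helical fiber centerline at parameter t without storing it:
    the geometry is recomputed from the hashed parameters on demand."""
    p = fiber_params(yarn_id, ply_id, fiber_id)
    angle = twist * t + p["phase"]
    return (p["radius"] * math.cos(angle), p["radius"] * math.sin(angle), t)

# A renderer can call fiber_point() lazily, and only for fibers whose
# bounding volumes a ray actually intersects.
print(fiber_point(0, 1, 42, 0.25))
```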
4.
We present a novel example‐based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge on the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on‐site appearance acquisition to a lightweight photography process suited for non‐expert users. As our central contribution, we propose a shape‐agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture‐like appearance, by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on‐site shape‐agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible “rapid‐appearance‐modeling”.
5.
We present a versatile technique to convert textures with tristimulus colors into the spectral domain, allowing such content to be used in modern rendering systems. Our method is based on the observation that suitable reflectance spectra can be represented using a low‐dimensional parametric model that is intrinsically smooth and energy‐conserving, which leads to significant simplifications compared to prior work. The resulting spectral textures are compact and efficient: storage requirements are identical to standard RGB textures, and as few as six floating point instructions are required to evaluate them at any wavelength. Our model is the first spectral upsampling method to achieve zero error on the full sRGB gamut. The technique also supports large‐gamut color spaces, and can be vectorized effectively for use in rendering systems that handle many wavelengths at once.
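The six‐instruction evaluation budget mentioned in the abstract is consistent with a sigmoid applied to a quadratic polynomial in wavelength, a low‐dimensional form used in the spectral‐upsampling literature; the sketch below assumes that form, with hypothetical coefficient values.

```python
import math

def eval_spectrum(c0: float, c1: float, c2: float, lam: float) -> float:
    """Evaluate a smooth, bounded reflectance spectrum at wavelength lam (nm).
    A sigmoid of a quadratic stays in (0, 1) for any coefficients, which is
    what makes the representation intrinsically energy-conserving.
    Cost: two fused multiply-adds for the polynomial, then a multiply, an
    add, a reciprocal square root, and a final fused multiply-add."""
    x = (c0 * lam + c1) * lam + c2                   # Horner's rule
    return 0.5 + 0.5 * x / math.sqrt(1.0 + x * x)    # maps R -> (0, 1)

# Storage matches a standard RGB texture: three floats (c0, c1, c2) per texel.
print(eval_spectrum(-1e-5, 1e-2, -2.0, 550.0))
```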
6.
Iridescence is a natural phenomenon that is perceived as gradual color changes, depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real‐time computer graphics. We present a high‐quality real‐time method for rendering iridescent effects under image‐based lighting. Previous methods model dielectric thin‐films of varying thickness on top of an arbitrary micro‐facet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real‐time image‐based lighting. This leads to bright halos at grazing angles and over‐saturated colors on rough surfaces, which causes an unnatural appearance that is not observed in ground truth data. We address this problem by taking the distribution of light directions, given by the environment map and surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to the ones previously achieved, while having only a small negative impact on performance.
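The moment bookkeeping the abstract describes can be sketched as follows (illustrative only: the paper prefilters these moments, e.g. into environment‐map mip levels, rather than summing samples at shading time). The first moment of the radiance‐weighted light directions gives a mean light direction, and the second moment measures the angular spread used to filter the thin‐film reflectance term.

```python
import numpy as np

def direction_moments(dirs: np.ndarray, weights: np.ndarray):
    """First and second moments of a set of light directions.

    dirs    : (N, 3) unit vectors sampled over the specular lobe
    weights : (N,) radiance * solid-angle weights
    Returns the mean direction and a scalar spread in [0, 1], where 0 means
    all light arrives from a single direction."""
    w = weights / weights.sum()
    m1 = (w[:, None] * dirs).sum(axis=0)              # first moment
    m2 = np.einsum('n,ni,nj->ij', w, dirs, dirs)      # second moment (3x3)
    mean_dir = m1 / np.linalg.norm(m1)
    spread = 1.0 - mean_dir @ m2 @ mean_dir           # cosine-variance proxy
    return mean_dir, spread

# The iridescent reflectance term is then evaluated at mean_dir, with its
# spectral oscillations smoothed in proportion to `spread`, instead of being
# evaluated for a single unfiltered direction.
```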
7.
We introduce Flexible Live‐Wire, a generalization of the Live‐Wire interactive segmentation technique with floating anchors. In our approach, the user input for Live‐Wire is no longer limited to the setting of pixel‐level anchor nodes, but can use more general anchor sets. These sets can be of any dimension, size, or connectedness. The generality of the approach allows the design of a number of user interactions while providing the same functionality as the traditional Live‐Wire. In particular, we experiment with this new flexibility by designing four novel Live‐Wire interactions based on specific primitives: paint, pinch, probable, and pick anchors. These interactions are only a subset of the possibilities enabled by our generalization. Moreover, we discuss the computational aspects of this approach and provide practical solutions to alleviate any additional overhead. Finally, we illustrate our approach and new interactions through several example segmentations.
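The generalization from pixel anchors to anchor sets maps naturally onto a multi‐source, multi‐target shortest‐path query; the sketch below shows that core idea (a plain Dijkstra variant, assumed here for illustration; the paper additionally discusses how to keep the extra overhead low).

```python
import heapq

def livewire_between_sets(graph, source_set, target_set):
    """Shortest path from ANY node of one anchor set to ANY node of another.

    graph      : dict mapping node id -> list of (neighbor id, edge cost)
    source_set : set of node ids (an anchor of any size or connectedness)
    target_set : set of node ids
    Node ids are assumed to be ints so heap ties compare cleanly."""
    dist = {s: 0.0 for s in source_set}
    prev = {}
    heap = [(0.0, s) for s in source_set]   # every source starts at cost 0
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u in target_set:                 # first target reached is optimal
            path = [u]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get(u, float("inf")):   # stale heap entry
            continue
        for v, cost in graph[u]:
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None
```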
8.
Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, Adrien Bousseau. Computer Graphics Forum, 2019, 38(4):1-13
Empowered by deep learning, recent methods for material capture can estimate a spatially‐varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization‐based approaches. However, a single image is often simply not enough to observe the rich appearance of real‐world materials. We present a deep‐learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order‐independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high‐quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single‐image and complex multi‐image approaches.
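An order‐independent fusing layer can be realized with any symmetric pooling over per‐image features; the sketch below uses element‐wise max pooling (a common choice for such architectures, assumed here for illustration) so the fused features are invariant to both the number and the order of the input photographs.

```python
import numpy as np

def encode(image: np.ndarray) -> np.ndarray:
    """Stand-in for a shared convolutional encoder (hypothetical): each
    photograph is mapped to a feature map independently."""
    return image.mean(axis=-1, keepdims=True)   # placeholder features

def fuse(feature_maps: list) -> np.ndarray:
    """Order-independent fusion: element-wise max over per-image features.
    Permuting the inputs leaves the output unchanged, and any number of
    pictures (1..N) is accepted by the same network."""
    return np.maximum.reduce(feature_maps)

images = [np.random.rand(64, 64, 3) for _ in range(5)]
fused = fuse([encode(im) for im in images])
assert np.allclose(fused, fuse([encode(im) for im in reversed(images)]))
```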
9.
We introduce a novel flexible approach to spatiotemporal exploration of rectilinear scalar volumes. Our out‐of‐core representation, based on per‐frame levels of hierarchically tiled non‐redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low‐bitrate codec, able to store a variable‐rate approximation based on sparse coding with learned dictionaries into fixed‐size pages, is exploited to meet stringent bandwidth constraints during time‐critical operations, while a near‐lossless representation is employed to support high‐quality static frame rendering. A flexible high‐speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object‐space and image‐space operations for seamless support, on fat and thin clients, of different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high‐quality snapshots generated from near‐lossless data. The quality and performance of our approach are demonstrated on large data sets with thousands of multi‐billion‐voxel frames.
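The fixed‐size‐page idea can be sketched as follows (an illustration with a hypothetical page layout; the actual codec uses a learned dictionary and a proper sparse‐coding pursuit): each voxel brick is approximated by the dictionary coefficients that fit a constant page budget, so pages can be streamed and decoded independently on the GPU.

```python
import numpy as np

def encode_brick(brick: np.ndarray, D: np.ndarray, page_bytes: int = 64):
    """Approximate a flattened voxel brick as a sparse combination of
    dictionary atoms, truncated so (index, coefficient) pairs fill one page.

    brick : (M,) flattened voxel block
    D     : (M, A) learned dictionary, one atom per column
    A real codec would use a greedy pursuit such as OMP; the dense
    least-squares solve below is a simple stand-in."""
    coeffs, *_ = np.linalg.lstsq(D, brick, rcond=None)
    k = page_bytes // 6                      # e.g. 2-byte index + 4-byte float
    keep = np.argsort(np.abs(coeffs))[-k:]   # largest-magnitude coefficients
    return keep.astype(np.uint16), coeffs[keep].astype(np.float32)

def decode_brick(indices, values, D):
    """Decode is a small sum of atoms, independent per brick, which is what
    makes high-speed parallel GPU decoding possible."""
    return D[:, indices] @ values.astype(np.float64)

rng = np.random.default_rng(0)
D = rng.standard_normal((512, 1024))
brick = rng.standard_normal(512)
idx, val = encode_brick(brick, D)
approx = decode_brick(idx, val, D)
```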
10.
In this paper, we propose a PatchMatch‐based Multi‐View Stereo (MVS) algorithm that can efficiently estimate geometry in textureless areas. Conventional PatchMatch‐based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projections onto other images. Photometric consistency works well in textured regions but cannot discriminate between hypotheses in textureless regions, which makes geometry estimation there difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share similar depth‐normal values, local consistency guides depth and normal estimation using geometry from neighboring pixels with similar colors. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture, similar to previous work, which also provides coarse estimates at upper levels. We validate the effectiveness of our method on the ETH3D and Tanks and Temples benchmarks. Results show that our method outperforms the state of the art.
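The local‐consistency term can be written as an extra cost that complements the photometric one (a sketch under stated assumptions, not the paper's exact formulation): a pixel's depth‐normal hypothesis is pulled toward the hypotheses of similar‐colored neighbors, which regularizes exactly the textureless regions where the photometric term is flat.

```python
import numpy as np

def local_consistency_cost(p, hypothesis, colors, depths, normals, sigma_c=10.0):
    """Penalty for a (depth, normal) hypothesis at pixel p that disagrees
    with neighbors of similar color.

    colors: (H, W, 3) image; depths: (H, W) and normals: (H, W, 3) hold the
    current estimates. Illustrative 4-neighborhood version."""
    y, x = p
    d, n = hypothesis
    cost, wsum = 0.0, 1e-8
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        qy, qx = y + dy, x + dx
        if not (0 <= qy < depths.shape[0] and 0 <= qx < depths.shape[1]):
            continue
        # Color-similarity weight: same-surface neighbors dominate the sum.
        w = np.exp(-np.linalg.norm(colors[y, x] - colors[qy, qx]) / sigma_c)
        # Disagreement in depth and in normal orientation with that neighbor.
        disagree = abs(d - depths[qy, qx]) + (1.0 - float(n @ normals[qy, qx]))
        cost += w * disagree
        wsum += w
    return cost / wsum

# Total PatchMatch score = photometric cost + lambda * local consistency, so
# textureless pixels inherit plausible geometry from similar-colored neighbors.
```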
11.
The game and movie industries constantly face the challenge of reproducing materials. This problem is tackled by combining illumination models and various textures (painted or procedural patterns). Generating stochastic wall patterns is crucial in the creation of a wide range of backgrounds (castles, temples, ruins…). A specific Wang tile set was introduced previously to tackle this problem in an iterative fashion. However, long lines may appear as visual artifacts. We use this tile set in a new on‐the‐fly procedure to generate stochastic wall patterns. For this purpose, we introduce specific hash functions implementing a constrained Wang tiling. This technique makes it possible to generate boundless textures while giving control over the maximum line length. The algorithm is simple and easy to implement, and the wall structure we get from the tiles allows us to achieve visuals that reproduce all the small details of artist‐painted walls.
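The hash‐driven, on‐the‐fly tiling can be sketched as follows (a minimal unconstrained version; the paper's contribution is precisely the constrained hash functions, which additionally bound the maximum line length): every tile edge gets its color by hashing the edge's grid coordinates, so any cell of the boundless texture can be evaluated independently yet consistently.

```python
import hashlib

def edge_color(x: int, y: int, horizontal: bool, n_colors: int = 3) -> int:
    """Deterministic color for a tile edge, from a hash of the edge's grid
    coordinates. Two tiles sharing an edge hash the same key, so the tiling
    is consistent no matter which tile is generated first."""
    key = f"{'h' if horizontal else 'v'}:{x}:{y}".encode()
    digest = hashlib.blake2b(key, digest_size=4).digest()
    return int.from_bytes(digest, "little") % n_colors

def wang_tile(i: int, j: int):
    """Tile at cell (i, j), identified by its (north, east, south, west)
    edge colors; evaluated on the fly, with no neighbor iteration or
    texture-wide storage."""
    north = edge_color(i, j, True)
    south = edge_color(i, j + 1, True)
    west = edge_color(i, j, False)
    east = edge_color(i + 1, j, False)
    return north, east, south, west

print(wang_tile(10, 7))
```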
12.
Reproducing the appearance of real‐world materials using current printing technology is problematic. The reduced number of inks available defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out‐of‐gamut material appearance into the printer's gamut while minimizing such distortions as much as possible. We present a novel two‐step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness or roughness). In the first step, we work in the low‐dimensional intuitive appearance space recently proposed by Serrano et al. [SGM*16], and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From this intermediate representation, we then perform an image‐based optimization that includes color information to bring the BRDF into gamut. We show, both objectively and through a user study, how our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes need to be preserved. Moreover, we show how this approach can also be used for attribute‐preserving material editing.
13.
Jinho Choi, Sanghun Jung, Deok Gun Park, Jaegul Choo, Niklas Elmqvist. Computer Graphics Forum, 2019, 38(3):249-260
The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep‐neural‐network‐based approach that automatically recognizes key elements in a visualization, including the visualization type, graphical elements, labels, legends, and, most importantly, the original data conveyed in the visualization. We leverage the extracted information to make the content of the visualization readable to visually impaired users. Based on interviews with visually impaired users, we built a Google Chrome extension designed to work with screen reader software to automatically decode charts on a webpage using our pipeline. We compared the performance of the back‐end algorithm with existing methods and evaluated the utility using qualitative feedback from visually impaired users.
14.
Rendering materials such as metallic paints, scratched metals and rough plastics requires glint integrators that can capture all micro‐specular highlights falling into a pixel footprint, faithfully replicating surface appearance. Specular normal maps can be used to represent a wide range of arbitrary micro‐structures. The use of normal maps comes with important drawbacks, though: the appearance is dark overall due to back‐facing normals, and importance sampling is suboptimal, especially when the micro‐surface is very rough. We propose a new glint integrator relying on a multiple‐scattering patch‐based BRDF that addresses these issues. To do so, our method uses a modified version of microfacet‐based normal mapping [SHHD17] designed for glint rendering, leveraging symmetric microfacets. To model multiple scattering, we re‐introduce the energy lost by a perfectly specular, single‐scattering formulation instead of using expensive random walks. This reflectance model is the basis of our patch‐based BRDF, enabling robust sampling and artifact‐free rendering with a natural appearance. Additional calculation costs amount to about 40% in the worst cases compared to previous methods [YHMR16, CCM18].
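Re‐introducing the lost energy can be sketched with a standard energy‐compensation pattern (an assumption for illustration; the paper's actual compensation term may differ): measure how much energy the single‐scattering specular term reflects, and return the remainder through an inexpensive compensation lobe rather than a random walk.

```python
import numpy as np

def single_scatter_albedo(brdf, n_samples=4096, seed=0):
    """Monte Carlo estimate of the directional-hemispherical albedo E of a
    single-scattering BRDF at normal incidence. brdf(wo, wi) -> scalar."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    r, phi = np.sqrt(u1), 2 * np.pi * u2      # cosine-weighted hemisphere
    wi = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)], axis=1)
    wo = np.array([0.0, 0.0, 1.0])
    vals = np.array([brdf(wo, w) for w in wi])
    return np.pi * vals.mean()   # E = mean(f * cos / pdf), pdf = cos / pi

def compensated_brdf(brdf, E, albedo=1.0):
    """Add back the missing multiple-scattering energy (1 - E) with a
    Lambertian-like lobe instead of an expensive random walk."""
    def f(wo, wi):
        return brdf(wo, wi) + albedo * (1.0 - E) / np.pi
    return f
```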
15.
Indirect illumination involving visually rich participating media such as turbulent smoke and explosions contributes significantly to the appearances of other objects in a rendered scene. However, previous real‐time techniques have focused only on the appearances of the media directly visible from the viewer. Specifically, appearances that can be seen indirectly on reflective surfaces have not attracted much attention. In this paper, we present a real‐time rendering technique for such indirect views that involve the participating media. To achieve real‐time performance for computing indirect views, we leverage layered polygonal area lights (LPALs), which can be obtained by slicing the media into multiple flat layers. Using this representation, the radiance entering each surface point from each slice of the volume is evaluated analytically for instant calculation. The analytic solution can be derived for standard bidirectional reflectance distribution functions (BRDFs) based on microfacet theory. Accordingly, our method is robust enough to work on surfaces with arbitrary shapes and roughness values. In addition, we propose a quadrature method for more accurate rendering of scenes with dense volumes, and a transformation of the domain of volumes that simplifies the calculation and implementation of the proposed method. By taking advantage of these computation techniques, the proposed method achieves real‐time rendering of indirect illumination for emissive volumes.
16.
G. Cordonnier, P. Ecormier, E. Galin, J. Gain, B. Benes, M.‐P. Cani. Computer Graphics Forum, 2018, 37(2):497-509
We introduce a novel method for interactive generation of visually consistent, snow‐covered landscapes and provide control of their dynamic evolution over time. Our main contribution is the real‐time phenomenological simulation of avalanches and other user‐guided events, such as tracks left by Nordic skiing, which can be applied to interactively sculpt the landscape. The terrain is modeled as a height field with additional layers for stable, compacted, unstable, and powdery snow, which behave in combination as a semi‐viscous fluid. We incorporate the impact of several phenomena, including sunlight, temperature, prevailing wind direction, and skiing activities. The snow evolution includes snow‐melt and snow‐drift, which affect the stability of the snow mass and the probability of avalanches. A user can shape landscapes and their evolution either with a variety of interactive brushes, or by prescribing events along a winter‐season timeline. Our optimized GPU implementation allows interactive updates of snow type and depth across a large (10 × 10 km) terrain, including real‐time avalanches, making this suitable for visual assets in computer games. We evaluate our method through perceptual comparison against existing methods and real snow‐depth data.
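The layered height‐field representation can be sketched as a per‐cell record (field names are illustrative, not the paper's): each terrain cell carries depths for the different snow types, and stability checks drive user‐visible events such as avalanches.

```python
from dataclasses import dataclass

@dataclass
class SnowColumn:
    """Per-cell snow layers stacked on the terrain height field (meters)."""
    stable: float = 0.0      # settled, bonded snow
    compacted: float = 0.0   # e.g. under ski tracks
    unstable: float = 0.0    # slab prone to avalanching
    powder: float = 0.0      # fresh, wind-transportable snow

    def depth(self) -> float:
        return self.stable + self.compacted + self.unstable + self.powder

def avalanche_possible(col: SnowColumn, slope_deg: float, critical=35.0) -> bool:
    """Toy trigger: unstable snow on a slope steeper than a critical angle
    may slide; the actual simulation uses a richer stability model."""
    return col.unstable > 0.0 and slope_deg > critical
```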
17.
We introduce a bidirectional reflectance distribution function (BRDF) model for the rendering of materials that exhibit hazy reflections, whereby the specular reflections appear to be flanked by a surrounding halo. The focus of this work is on artistic control and ease of implementation for real‐time and off‐line rendering. We propose relying on a composite material based on a pair of arbitrary BRDF models; however, instead of controlling their physical parameters, we expose perceptual parameters inspired by visual experiments [VBF17]. Our main contribution then consists in a mapping from perceptual to physical parameters that ensures the resulting composite BRDF is valid in terms of reciprocity, positivity and energy conservation. The immediate benefit of our approach is to provide direct artistic control over both the intensity and extent of the haze effect, which is not only necessary for editing purposes, but also essential to vary haziness spatially over an object surface. Our solution is also simple to implement, as it requires no new importance sampling strategy and relies on existing BRDF models. Such simplicity is key to approximating the method for the editing of hazy gloss in real time and for compositing.
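The validity argument can be made concrete: a convex combination of two reciprocal, positive, energy‐conserving BRDFs inherits all three properties automatically. A minimal sketch of such a two‐lobe composite, with a hypothetical perceptual blend parameter, is shown below.

```python
def hazy_brdf(f_core, f_halo, haziness: float):
    """Composite BRDF: a narrow core lobe plus a wider halo lobe.

    f_core, f_halo : valid BRDFs, callables (wo, wi) -> scalar
    haziness       : perceptual blend weight in [0, 1]
    Because the combination is convex, reciprocity, positivity, and energy
    conservation carry over from the two input BRDFs."""
    assert 0.0 <= haziness <= 1.0
    def f(wo, wi):
        return (1.0 - haziness) * f_core(wo, wi) + haziness * f_halo(wo, wi)
    return f

# Intensity and extent are controlled separately: `haziness` moves energy
# into the halo, while the roughness chosen for f_halo sets how far the halo
# spreads around the highlight; both can vary spatially over a surface.
```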
18.
C. Stoiber, A. Rind, F. Grassinger, R. Gutounig, E. Goldgruber, M. Sedlmair, Š. Emrich, W. Aigner. Computer Graphics Forum, 2019, 38(3):699-711
Journalists need visual interfaces that cater to the exploratory nature of their investigative activities. In this paper, we report on a four‐year design study with data journalists. The main result is netflower, a visual exploration tool that supports journalists in investigating quantitative flows in dynamic network data for story‐finding. The visual metaphor is based on Sankey diagrams and has been extended to handle large amounts of input data as well as network changes over time. We followed a structured, iterative design process including requirement analysis and multiple design and prototyping iterations in close cooperation with journalists. To validate our concept and prototype, a workshop series and two diary studies were conducted with journalists. Our findings indicate that the prototype can be picked up quickly by journalists and that valuable insights can be gained within a few hours. The prototype can be accessed at: http://netflower.fhstp.ac.at/
19.
This paper proposes a deep learning‐based image tone enhancement approach that can maximally enhance the tone of an image while preserving naturalness. Our approach does not require ground‐truth images carefully generated by human experts for training. Instead, we train a deep neural network to mimic the behavior of a previous classical filtering method that produces drastic but possibly unnatural‐looking tone enhancement results. To preserve naturalness, we adopt the generative adversarial network (GAN) framework as a regularizer. To suppress artifacts caused by the generative nature of the GAN framework, we also propose an imbalanced cycle‐consistency loss. Experimental results show that our approach effectively enhances the tone and contrast of an image while preserving naturalness better than previous state‐of‐the‐art approaches.
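The imbalanced cycle‐consistency loss can be sketched as an asymmetric weighting of the two cycle terms (hypothetical weights and naming; the paper's exact loss may differ): the cycle that passes through the enhancement generator is constrained more strongly, suppressing artifacts the GAN would otherwise be free to introduce.

```python
import numpy as np

def imbalanced_cycle_loss(x, y, G, F, w_forward=10.0, w_backward=1.0):
    """Cycle-consistency with asymmetric (imbalanced) weights.

    G : input -> enhanced   (the tone-enhancement generator)
    F : enhanced -> input   (the inverse mapping)
    Weighting the two L1 reconstruction terms differently penalizes
    artifacts in the enhancement direction more heavily."""
    forward = np.abs(F(G(x)) - x).mean()    # x -> enhanced -> x
    backward = np.abs(G(F(y)) - y).mean()   # y -> de-enhanced -> y
    return w_forward * forward + w_backward * backward
```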
20.
Color scribbling is a unique form of illustration where artists use compact, overlapping, monochromatic scribbles at a microscopic scale to create astonishing colorful images at a macroscopic scale. The creation process is skill‐demanding and time‐consuming, typically involving drawing monochromatic scribbles layer by layer to delicately depict true‐color subjects using a limited color palette. In this work, we present a novel computational framework for the automatic generation of color scribble images from arbitrary raster images. The core contribution of our work lies in a novel color dithering model tailor‐made for synthesizing a smooth color appearance using multiple layers of overlapping monochromatic strokes. Specifically, our system reconstructs the appearance of the input image by (i) generating layers of monochromatic scribbles based on a limited color palette derived from the input image, and (ii) optimizing the drawing sequence among layers to minimize both the visual color dissimilarity between the dithered image and the original image and the color banding artifacts. We demonstrate the effectiveness and robustness of our algorithm with various convincing results synthesized from a variety of input images with different stroke patterns. Our experimental study further shows that our approach faithfully captures the scribble style and the color presentation at the microscopic and macroscopic scales, respectively, which is otherwise difficult for state‐of‐the‐art methods.