Similar Documents
20 similar documents found (search time: 0 ms)
1.
QR code is a 2D matrix barcode widely used for product tracking, identification, document management and general marketing. Recently, there have been various attempts to utilize QR codes in 3D manufacturing by carving QR codes on the surface of a printed 3D shape. Nevertheless, significant shape editing and modulation may be required for the embedded 3D-QR-codes to be readable with good decoding accuracy. In this paper, we introduce a novel QR code 3D fabrication framework aimed at unobtrusively embedding 3D-QR-codes in the shape, thereby introducing minimal shape modulation. Essentially, our method computes bi-directional carvings in the 3D shape surface to obtain the black-and-white QR pattern. Under a directional light source, the black-and-white QR pattern emerges as lighted and shadow-cast blocks on the shape, respectively. To keep the modulation minimal and unobtrusive, we optimize the QR code carving w.r.t. shape geometry, visual disparity and light source position. Our technique employs a simulation of lighting phenomena through the carved modules on the shape to ensure adequate contrast of the printed 3D-QR-code.

2.
We present a reconstruction framework, which fits physically‐based constraints to model large‐scale cloud scenes from satellite images. Applications include weather phenomena visualization, flight simulation, and weather spotter training. In our method, the cloud shape is assumed to be composed of a cloud top surface and a nearly flat cloud base surface. Based on this, an effective method of multi‐spectral data processing is developed to obtain the relevant information for calculating the cloud base height and the cloud top height, including ground temperature, cloud top temperature and cloud shadow. A lapse rate model is proposed to formulate cloud shape as an implicit function of the temperature lapse rate and the cloud base temperature. After obtaining initial cloud shapes, we enrich the shapes by a fractal method and represent the reconstructed clouds by a particle system. Experimental results demonstrate the capability of our method in generating physically sound large‐scale cloud scenes from high‐resolution satellite images.
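The lapse-rate relationship the abstract alludes to can be illustrated with a small calculation: assuming temperature falls roughly linearly with altitude, the cloud base height follows from the ground and cloud-base temperatures. This is an illustrative sketch, not the paper's model; the default lapse rate of 6.5 °C/km is the standard-atmosphere value.

```python
def cloud_base_height(t_ground_c, t_cloud_base_c, lapse_rate_c_per_m=0.0065):
    """Estimate cloud base height above ground (in meters), assuming
    temperature drops linearly with altitude at a constant lapse rate.
    (Hedged sketch: the paper's lapse rate model is more involved.)"""
    return (t_ground_c - t_cloud_base_c) / lapse_rate_c_per_m
```

For example, a 25 °C ground temperature and a 12 °C cloud base temperature yield a base height of about 2000 m.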

3.
Stroke correspondence construction is a precondition for vectorized 2D animation inbetweening and remains a challenging problem. This paper introduces the FTP‐SC, a fuzzy topology preserving stroke correspondence technique, which is accurate and gives the user more effective control over the correspondence result than previous matching approaches. The method employs a two‐stage scheme to progressively establish stroke correspondences between the keyframes. In the first stage, the stroke correspondences with high confidence are constructed by enforcing the preservation of the so‐called “fuzzy topology”, which encodes the intrinsic connectivity among neighboring strokes. Starting with the high‐confidence correspondences, the second stage performs a greedy matching algorithm to generate a full correspondence between the strokes. Experimental results show that the FTP‐SC outperforms existing approaches and can establish the stroke correspondence with a reasonable amount of user interaction, even for keyframes with large geometric and spatial variations between strokes.

4.
We construct a family of barycentric coordinates for 2D shapes including non‐convex shapes, shapes with boundaries, and skeletons. Furthermore, we extend these coordinates to 3D and arbitrary dimension. Our approach modifies the construction of the Floater‐Hormann‐Kós family of barycentric coordinates for 2D convex shapes. We show why such coordinates are restricted to convex shapes and show how to modify these coordinates to extend to discrete manifolds of co‐dimension 1 whose boundaries are composed of simplicial facets. Our coordinates are well‐defined everywhere (no poles) and easy to evaluate. While our construction is widely applicable to many domains, we show several examples related to image and mesh deformation.

5.
Efficiently and accurately rendering hair while accounting for multiple scattering is a challenging open problem. Path tracing in hair takes a long time to converge, while other techniques are either too approximate (yet still computationally expensive) or make assumptions about the scene. We present a technique to infer the higher-order scattering in hair in constant time within the path tracing framework, achieving better computational efficiency. Our method makes no assumptions about the scene and provides control over the renderer's bias and speedup. We achieve this by training a small multilayer perceptron (MLP) to learn the higher-order radiance online, while rendering progresses. We describe how to robustly train this network and thoroughly analyze our resulting renderer's characteristics. We evaluate our method on various hairstyles and lighting conditions. We also compare our method against a recent learning-based and a traditional real-time hair rendering method, and demonstrate better quantitative and qualitative results. Our method achieves a significant improvement in speed with respect to path tracing, achieving a run-time reduction of 40%-70% while only introducing a small amount of bias.

6.
Normal vectors are essential for many point cloud operations, including segmentation, reconstruction and rendering. The robust estimation of normal vectors from 3D range scans is a challenging task due to undersampling and noise, especially when combining points sampled from multiple sensor locations. Our error model assumes a Gaussian distribution of the range error with spatially‐varying variances that depend on sensor distance and reflected intensity, mimicking the features of Lidar equipment. In this paper we study the impact of measurement errors on the covariance matrices of point neighborhoods. We show that covariance matrices of the true surface points can be estimated from those of the acquired points plus sensor‐dependent directional terms. We derive a lower bound on the neighborhood size to guarantee that the estimated matrix coefficients will be within a predefined error with a prescribed probability. This bound is key to achieving an optimal trade‐off between smoothness and fine detail preservation. We also propose and compare different strategies for handling neighborhoods with samples coming from multiple materials and sensors. We show analytically that our method provides better normal estimates than competing approaches under noise conditions similar to those found in Lidar equipment.
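The covariance analysis described above builds on the standard PCA normal estimator: the normal of a point neighborhood is the eigenvector of its covariance matrix with the smallest eigenvalue. The sketch below shows only this classical step; the paper's sensor-dependent correction terms are not modeled here.

```python
import numpy as np

def estimate_normal(neighborhood):
    """Estimate a surface normal as the eigenvector of the neighborhood
    covariance matrix with the smallest eigenvalue (the standard PCA step;
    the paper's contribution is correcting this covariance for
    sensor-dependent noise, which this sketch omits)."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-variance direction
```

For a perfectly planar neighborhood the smallest eigenvalue is zero and the returned vector is (up to sign) the plane normal.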

7.
Image vectorisation is a fundamental method in graphic design and one of the tools that allow an artist's work to be transferred into computer graphics. Existing methods are based mainly on segmentation, or they analyse every image pixel; thus, they are relatively slow. We introduce a novel method for fast line drawing image vectorisation, based on a multi‐scale second derivative detector accelerated by a summed‐area table and an auxiliary grid. The image is initially scanned along the grid lines, and nodes are added to improve accuracy. Applying inertia to the line tracing allows for better junction mapping in a single pass. Our method is dedicated to grey‐scale sketches and line drawings. It works efficiently regardless of the thickness of the line or its shading. Experiments show it is more than two orders of magnitude faster than existing methods, without sacrificing accuracy.
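The summed-area table used to accelerate the detector is a generic technique that can be sketched in a few lines: once the table is built, any axis-aligned box sum costs at most four lookups. This is the textbook structure, not the paper's implementation.

```python
import numpy as np

def summed_area_table(img):
    """Integral image: sat[i, j] holds the sum of img[:i+1, :j+1]."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=float), axis=0), axis=1)

def box_sum(sat, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) via inclusion-exclusion."""
    total = sat[r1, c1]
    if r0 > 0:
        total -= sat[r0 - 1, c1]
    if c0 > 0:
        total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += sat[r0 - 1, c0 - 1]
    return total
```

Multi-scale box filters (and hence second-derivative approximations at several scales) can then be evaluated at constant cost per pixel regardless of filter size.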

8.
In this work we present the first algorithm for restoring consistency between curve networks on non‐parallel cross‐sections. Our method addresses a critical but overlooked challenge in the reconstruction process from cross‐sections that stems from the fact that cross‐sectional slices are often generated independently of one another, such as in interactive volume segmentation. As a result, the curve networks on two non‐parallel slices may disagree where the slices intersect, which makes these cross‐sections an invalid input for surfacing. We propose a method that takes as input an arbitrary number of non‐parallel slices, each partitioned into two or more labels by a curve network, and outputs a modified set of curve networks on these slices that are guaranteed to be consistent. We formulate the task of restoring consistency while preserving the shape of input curves as a constrained optimization problem, and we propose an effective solution framework. We demonstrate our method on a data‐set of complex multi‐labeled input cross‐sections. Our technique efficiently produces consistent curve networks even in the presence of large errors.

9.
A major challenge in line drawing vectorization is segmenting the input bitmap into separate curves. This segmentation is especially problematic for rough sketches, where curves are depicted using multiple overdrawn strokes. Inspired by feature-aligned mesh quadrangulation methods in geometry processing, we propose to extract vector curve networks by parametrizing the image with local drawing-aligned integer grids. The regular structure of the grid facilitates the extraction of clean line junctions; due to the grid's discrete nature, nearby strokes are implicitly grouped together. We demonstrate that our method successfully vectorizes both clean and rough line drawings, whereas previous methods focused on only one of those drawing types.

10.
Freeform surfaces whose principal curvature line network is regularly distributed are essential to many real applications such as CAD modeling, architectural design, and industrial fabrication. However, most designed surfaces do not have this nice property because it is hard to enforce such constraints in the design process. In this paper, we present a novel method for surface fairing that takes a regular distribution of the principal curvature line network on a surface as its objective. Our method first removes the high‐frequency signals from the curvature tensor field of an input freeform surface using a novel rolling guidance tensor filter, which results in a more regular and smooth curvature tensor field, and then deforms the input surface to match the smoothed field as closely as possible. As an application, we solve the problem of approximating freeform surfaces with regular principal curvature line networks, discretized by quadrilateral meshes. By introducing circular or conical conditions on the quadrilateral mesh to guarantee the existence of discrete principal curvature line networks, and by minimizing the approximation error to the original surface while improving the fairness of the quad mesh, we obtain a regular discrete principal curvature line network that approximates the original surface. We evaluate the efficacy of our method on various freeform surfaces and demonstrate the superiority of the rolling guidance tensor filter over other tensor smoothing techniques. We also utilize our method to generate high‐quality circular/conical meshes for architectural design and cyclide spline surfaces for CAD modeling.
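The rolling guidance idea referenced above — repeatedly filtering the input while using the previous result as the guidance signal, so small-scale variation is removed and large structures are recovered — can be sketched on a 1D scalar signal. The paper applies the same rolling scheme to curvature tensor fields, which this sketch does not attempt to reproduce.

```python
import numpy as np

def rolling_guidance_1d(signal, sigma_s=3.0, sigma_r=0.1, iters=4):
    """Scalar 1D rolling guidance filter: the first pass degenerates to a
    Gaussian blur (zero guidance), and each subsequent pass joint-bilateral-
    filters the ORIGINAL input using the previous result as guidance.
    (Hedged sketch of the rolling scheme; the paper filters tensors.)"""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    radius = int(3 * sigma_s)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
    guide = np.zeros_like(x)  # zero guidance => first pass is a plain blur
    for _ in range(iters):
        out = np.empty_like(x)
        for i in range(n):
            j = np.clip(i + offsets, 0, n - 1)  # clamp at boundaries
            w = spatial * np.exp(-(guide[j] - guide[i])**2 / (2 * sigma_r**2))
            out[i] = np.sum(w * x[j]) / np.sum(w)
        guide = out
    return guide
```

A constant signal is a fixed point of the filter, and small oscillations below the sigma_r scale are progressively suppressed while larger steps are restored over the iterations.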

11.
As the number of models for 3D indoor scenes increases rapidly, methods for generating lighting layouts have also become increasingly important. This paper presents a novel method that creates optimal placements and intensities for a set of lights in indoor scenes. Our method is characterized by designing the objective functions for the optimization based on the lighting guidelines used in the interior design field. Specifically, to apply major elements of the lighting guidelines, we identify three criteria, namely structure, function, and aesthetics, that are suitable for the virtual space and quantify them through a set of objective terms: pairwise relation, hierarchy, circulation, illuminance, and collision. Given an indoor scene with properly arranged furniture as input, our method combines procedural and optimization‐based approaches to generate lighting layouts appropriate to the geometric and functional characteristics of the input scene. The effectiveness of our method is demonstrated with an ablation study of the cost terms for the optimization and a user study for perceptual evaluation.

12.
In this work, we introduce multi‐column graph convolutional networks (MGCNs), a deep generative model for 3D mesh surfaces that effectively learns a non‐linear facial representation. We perform spectral decomposition of meshes and apply convolutions directly in the frequency domain. Our network architecture involves multiple columns of graph convolutional networks (GCNs), namely large GCN (L‐GCN), medium GCN (M‐GCN) and small GCN (S‐GCN), with different filter sizes to extract features at different scales. L‐GCN is more useful to extract large‐scale features, whereas S‐GCN is effective for extracting subtle and fine‐grained features, and M‐GCN captures information in between. Therefore, to obtain a high‐quality representation, we propose a selective fusion method that adaptively integrates these three kinds of information. Spatially non‐local relationships are also exploited through a self‐attention mechanism to further improve the representation ability in the latent vector space. Through extensive experiments, we demonstrate the superiority of our end‐to‐end framework in improving the accuracy of 3D face reconstruction. Moreover, with the help of variational inference, our model has excellent generating ability.
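The spectral decomposition step the abstract relies on — projecting per-vertex signals onto eigenvectors of a graph Laplacian, so that convolutions can act in the frequency domain — can be sketched for a plain graph. This is the textbook operation, not the MGCN architecture itself.

```python
import numpy as np

def graph_spectral_filter(adj, signal, keep):
    """Project a per-vertex signal onto the `keep` lowest-frequency
    eigenvectors of the combinatorial graph Laplacian L = D - A.
    (Illustrative sketch of the spectral decomposition step only.)"""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)   # ascending graph frequencies
    basis = eigvecs[:, :keep]              # low-frequency Fourier basis
    return basis @ (basis.T @ signal)      # low-pass reconstruction
```

Keeping only the single lowest mode (the constant eigenvector of a connected graph) reduces any signal to its mean, the crudest possible low-pass filter; larger `keep` values retain progressively finer detail.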

13.
Naïve linear methods for downsampling high‐resolution microflake volumes often produce inaccurate appearance, especially when the input voxels are very opaque. Preserving correct appearance at all resolutions requires taking into account the masking‐shadowing effects that occur between and inside dense input voxels. We introduce a new microflake model whose additional parameters characterize self‐shadowing effects at a microscopic scale. We provide an anisotropic self‐shadowing function and microflake distributions for which the scattering coefficients and the phase functions of our model have closed‐form expressions. We use this model in a new downsampling approach in which scattering parameters are computed from local estimations of self‐shadowing probabilities in the input volume. Unlike previous work, our method handles datasets with spatially varying scattering parameters, semi‐transparent volumes and datasets with intricate silhouettes. We show that our method generates LoDs with correct transparency and consistent appearance across scales for a wide range of challenging datasets, allowing for huge memory savings and efficient distant rendering without loss of quality.

14.
State-of-the-art methods for line drawing vectorization rely on generated frame fields for robust direction disambiguation, with each of the two axes aligning to different intersecting curve tangents around junctions. However, a common source of topological error for such methods are frame field singularities. To remedy this, we introduce the first frame field optimization framework guaranteed to produce singularity-free fields aligned to a line drawing. We first perform a convex solve for a roughly-aligned orthogonal frame field (cross field), and then comb away its internal singularities with an optimal transport–based matching. The resulting topology of the field is strictly maintained with the machinery of discrete trivial connections in a final, non-convex optimization that allows non-orthogonality of the field, improving smoothness and tangent alignment. Our frame fields can serve as a drop-in replacement for frame field optimizations used in previous work, improving the quality of the final vectorizations.

15.
Landscape models of geospatial regions provide an intuitive mechanism for exploring complex geospatial information. However, the methods currently used to create these scale models require a large amount of resources, which restricts the availability of these models to a limited number of popular public places, such as museums and airports. In this paper, we propose a system for creating these physical models using an affordable 3D printer, making the creation of such models more widely accessible. Our system retrieves GIS data relevant to creating a physical model of a geospatial region and then addresses the two major limitations of affordable 3D printers: the limited number of materials and the available printing volume. This is accomplished by separating features into distinct extruded layers and splitting large models into smaller pieces, allowing us to employ different methods for the visualization of different geospatial features, such as vegetation and residential areas, in a 3D printing context. We confirm the functionality of our system by printing two large physical models of relatively complex landscape regions.
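The build-volume limitation can be illustrated with a tiny, hypothetical helper that computes how many pieces a model must be tiled into per axis to fit a printer's build volume. The paper's actual splitting is geometry-aware, not the uniform tiling sketched here, and `split_counts` is an invented name for illustration.

```python
import math

def split_counts(model_dims_mm, build_volume_mm):
    """Number of pieces per axis when tiling a model's bounding box into a
    printer's build volume (hypothetical helper; the paper splits models
    along feature-aware cuts rather than a uniform grid)."""
    return tuple(math.ceil(m / b) for m, b in zip(model_dims_mm, build_volume_mm))
```

A 400 × 300 × 50 mm terrain model on a printer with a 200 mm cubic build volume would need at least 2 × 2 × 1 = 4 pieces.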

16.
Cosine‐Weighted B‐spline (CWB) interpolation [Csé13] has been originally proposed for volumetric data sampled on the Body‐Centered Cubic (BCC) lattice. The BCC lattice is well known to be optimal for sampling isotropically band‐limited signals above the Nyquist limit. However, the Face‐Centered Cubic (FCC) lattice has been recently proven to be optimal for low‐rate sampling. The CWB interpolation is a state‐of‐the‐art technique on the BCC lattice, which outperforms, for example, the previously proposed box‐spline interpolation in terms of both efficiency and visual quality. In this paper, we show that CWB interpolation can be adapted to the FCC lattice as well, and results in similarly isotropic signal reconstructions as on the BCC lattice.
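The two lattices compared above have simple integer characterizations under the conventional scaling: a BCC point has all coordinates of equal parity, while an FCC point has an even coordinate sum. A minimal membership check (an aside on the lattices themselves, not part of the CWB construction):

```python
def is_bcc(p):
    """BCC lattice: integer points whose coordinates all share one parity
    (the Cartesian grid plus its body-centered half-offset copy)."""
    x, y, z = p
    return x % 2 == y % 2 == z % 2

def is_fcc(p):
    """FCC lattice: integer points with an even coordinate sum
    (the Cartesian grid plus its three face-centered copies)."""
    x, y, z = p
    return (x + y + z) % 2 == 0
```

For instance, the body center (1, 1, 1) belongs to BCC but not to the even sublattice, and the face center (1, 1, 0) belongs to FCC.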

17.
Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution for exploring this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye‐tracking experiments involving 3D shapes, with both static and time‐varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes, with the aim of producing a publicly available benchmark of 3D meshes with fixation density maps. First, the collected data is used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state‐of‐the‐art mesh saliency models in predicting ground‐truth fixations using two different metrics. We show that, even combined with a center‐bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally provide a comparison of human‐eye fixations and Schelling points and show that their correlation is weak.

18.
The combinatorial dual of a hex mesh induces a collection of mutually intersecting surfaces (dual sheets). Inspired by Campen et al.'s work on quad meshing [CBK12, CK14], we propose to directly generate such dual sheets so that, as long as the volume is properly partitioned by the dual sheets, we are guaranteed to arrive at a valid all‐hex mesh topology. Since automatically generating dual sheets seems much harder than in the 2D counterpart, we chose to leave the task to the user; our system is equipped with a few simple 3D modeling tools for interactively designing dual sheets. Dual sheets are represented as implicit surfaces in our approach, greatly simplifying many of the computational steps such as finding intersections and analyzing topology. We also propose a simple algorithm for primalizing the dual graph where each dual cell, often enclosing singular edges, gets mapped onto a reference polyhedron via harmonic parameterization. Preservation of sharp features is simply achieved by modifying the boundary conditions. We demonstrate the feasibility of our approach through various modeling examples.

19.
Dual paraboloid mapping is an approach to environment mapping. Its major advantage is its fast map generation speed. For graphics applications, when filtering is needed, the natural filtering tool is mipmapping. However, directly applying mipmapping to dual paraboloid mapping gives rise to three problems: discontinuity across the dual paraboloid map boundary, non‐uniform sampling, and depth testing issues. We propose three approaches to solve these problems, based on closed‐form equations derived via theoretical analysis. Using these equations, we modify the coordinates involved in the rendering process. In other words, these problems are handled simply by using dual paraboloid maps and mipmaps differently, instead of fundamentally altering their data structures. Consequently, we fix the problems without sacrificing the map generation speed advantage. Applying all three approaches, we improve the rendering quality of dual paraboloid map mipmaps to a level equivalent to that of cubemap mipmaps, while preserving the fast map generation speed. This gives dual paraboloid map mipmaps the potential to be a better choice than cubemap mipmaps for devices with less computational power. The effectiveness and efficiency of the proposed approaches are demonstrated using a glossy reflection application and an omnidirectional soft shadow generation application.
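The dual paraboloid parameterization underlying this work maps a unit direction onto one of two paraboloid faces; the standard front-face formula is (u, v) = (d_x, d_y) / (1 + d_z), with the back face mirroring it. A minimal sketch of the base mapping (not the paper's modified coordinates):

```python
def paraboloid_uv(d):
    """Map a unit direction to dual-paraboloid texture coordinates
    (face, u, v) with u, v in [-1, 1].  The front map covers d_z >= 0
    via (u, v) = (d_x, d_y) / (1 + d_z); the back map uses 1 - d_z.
    (Classic mapping only; the paper adjusts these coordinates to fix
    boundary, sampling and depth issues under mipmapping.)"""
    x, y, z = d
    if z >= 0.0:
        return ('front', x / (1.0 + z), y / (1.0 + z))
    return ('back', x / (1.0 - z), y / (1.0 - z))
```

The forward axis (0, 0, 1) lands at the center of the front map, and directions near the equator (d_z ≈ 0) approach the shared boundary of the two maps, which is exactly where the discontinuity problem above arises.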

20.
It is well known that cubic texture filtering can be efficiently implemented on the GPU using a method published by Sigg and Hadwiger [SH05], which simplifies the evaluation to a linear combination of linear texture fetches. However, their method cannot be directly applied if the filter kernel also takes negative values, as the popular Catmull‐Rom spline does, for example. In this paper, we propose a modified algorithm that can also handle negative weights. Using our method, Catmull‐Rom spline interpolation can therefore be evaluated in one, two, and three dimensions by taking two, four, and eight linear texture fetches, respectively.
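Catmull-Rom's negative lobes, which break the original two-fetch trick, are easy to see from the basis weights themselves: w0 and w3 are never positive, so adjacent taps cannot always be merged into one hardware linear fetch without sign handling. The sketch below evaluates the standard 1D Catmull-Rom weights and filter on the CPU; it illustrates the kernel, not the paper's GPU algorithm.

```python
def catmull_rom_weights(t):
    """Catmull-Rom weights for samples at offsets -1, 0, 1, 2 from the
    integer part of the coordinate, at fractional position t in [0, 1).
    Note w0 = -t(t-1)^2/2 <= 0 and w3 = t^2(t-1)/2 <= 0: these negative
    lobes are what the Sigg-Hadwiger linear-fetch trick cannot absorb
    without the modification the paper describes."""
    t2, t3 = t * t, t * t * t
    w0 = 0.5 * (-t3 + 2.0 * t2 - t)
    w1 = 0.5 * (3.0 * t3 - 5.0 * t2 + 2.0)
    w2 = 0.5 * (-3.0 * t3 + 4.0 * t2 + t)
    w3 = 0.5 * (t3 - t2)
    return w0, w1, w2, w3

def filter1d(samples, x):
    """Evaluate the Catmull-Rom filter on a 1D sample array at
    continuous coordinate x (reference implementation, four taps)."""
    i, t = int(x), x - int(x)
    w = catmull_rom_weights(t)
    return sum(wk * samples[i - 1 + k] for k, wk in enumerate(w))
```

The weights always sum to one, and the filter reproduces linear data exactly, which makes the reference implementation easy to sanity-check.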


Copyright©北京勤云科技发展有限公司  京ICP备09084417号