Similar Literature
1.
We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one-degree-of-freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed and other moving parts. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.

2.
In this paper, we provide a smooth extension of the energy-aware Gauss-Seidel iteration to the Position-Based Dynamics (PBD) method. This extension is inspired by the equalization of kinetic and potential energy changes and builds on the foundations of the recent extended version of the PBD algorithm (XPBD). The proposed method is not meant to conserve the total energy of the system; instead, it modifies each position constraint based on the equality of the kinetic and potential energy changes within the Gauss-Seidel process of the XPBD algorithm. Our extension provides an implicit solution with improved stiffness during the simulation of elastic objects. We apply our solution directly within each Gauss-Seidel iteration, and it is independent of both the simulation step size and the integration method. To demonstrate the benefits of our proposed extension at higher frame rates, we develop an efficient and practical mesh coloring algorithm for the XPBD method that enables parallel processing on a GPU. During the initialization phase, all mesh primitives are grouped according to their connectivity. Afterwards, all these groups are computed simultaneously on the GPU during the simulation phase. We demonstrate the benefits of our method with many spring-potential and strain-based continuous-material constraints. Our proposed algorithm is easy to implement and fits seamlessly into existing position-based frameworks.
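For context, the core of an XPBD-style solver is the per-constraint Lagrange multiplier update inside the Gauss-Seidel sweep. The following minimal Python sketch shows a vanilla XPBD pass over distance constraints only; the energy-equalization modification and the GPU mesh coloring described above are not reproduced, and the function and parameter names are illustrative.

import numpy as np

def xpbd_distance_sweep(x, w, edges, rest, lam, alpha, dt):
    """One Gauss-Seidel sweep over distance constraints (vanilla XPBD).

    x    : (N, 3) particle positions, updated in place
    w    : (N,)   inverse masses
    edges: (M, 2) particle index pairs
    rest : (M,)   rest lengths
    lam  : (M,)   accumulated Lagrange multipliers, updated in place
    alpha: constraint compliance (0 => rigid)
    dt   : simulation step size
    """
    alpha_tilde = alpha / (dt * dt)
    for c, (i, j) in enumerate(edges):
        d = x[i] - x[j]
        length = np.linalg.norm(d)
        if length < 1e-9:
            continue
        n = d / length
        C = length - rest[c]                      # constraint value
        denom = w[i] + w[j] + alpha_tilde
        dlam = (-C - alpha_tilde * lam[c]) / denom
        lam[c] += dlam
        x[i] += w[i] * dlam * n                   # position corrections
        x[j] -= w[j] * dlam * n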

3.
We present a grid-based fluid solver for simulating viscous materials and their interactions with solid objects. Our method formulates the implicit viscosity integration as a minimization problem with consistently estimated volume fractions to account for the sub-grid details of free surfaces and solid boundaries. To handle the interplay between fluids and solid objects under viscosity forces, we also formulate the two-way fluid-solid coupling as a unified minimization problem based on the variational principle, which naturally enforces the boundary conditions. Our formulation leads to a symmetric positive definite linear system with a sparse matrix regardless of the monolithically coupled solid objects. Additionally, we present a position-correction method using density constraints to enforce a uniform distribution of fluid particles and thus prevent the loss of fluid volume. We demonstrate the effectiveness of our method in a wide range of viscous fluid scenarios.
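As a point of reference, a symmetric positive definite sparse system of this kind is typically solved with a preconditioned conjugate gradient method. The sketch below is a generic Python/SciPy illustration with a Jacobi preconditioner and a toy 1D Laplacian standing in for an assembled viscosity system; it is not the authors' solver.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_spd(A, b):
    """Solve A x = b for a sparse symmetric positive definite A
    with Jacobi-preconditioned conjugate gradients."""
    diag = A.diagonal()
    M = sp.diags(1.0 / np.where(diag > 0, diag, 1.0))  # Jacobi preconditioner
    x, info = spla.cg(A, b, M=M)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return x

# toy example: 1D Laplacian (SPD), a stand-in for an assembled viscosity system
n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
u = solve_spd(A, b)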

4.
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores, for each grid cell, the score of the corresponding view, which measures how much it reduces the uncertainty (entropy) of both the geometric reconstruction and the semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
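To illustrate the next-best-view idea, the hedged sketch below scores a discretized set of candidate views by the amount of per-voxel entropy they could resolve and picks the maximizer. The visible_voxels callable is a hypothetical stand-in for the visibility computation; the actual VSF construction and path optimization are not reproduced.

import numpy as np

def entropy(p, eps=1e-9):
    """Shannon entropy of per-voxel class/occupancy distributions p: (..., K)."""
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def next_best_view(view_grid, visible_voxels, voxel_probs):
    """Pick the view with the largest expected uncertainty reduction.

    view_grid     : list of (x, y, azimuth) candidate views
    visible_voxels: callable(view) -> indices of voxels seen from that view (placeholder)
    voxel_probs   : (V, K) current per-voxel class probabilities
    """
    H = entropy(voxel_probs)                      # current per-voxel uncertainty
    scores = []
    for view in view_grid:
        idx = visible_voxels(view)
        scores.append(H[idx].sum())               # uncertainty this view could resolve
    best = int(np.argmax(scores))
    return view_grid[best], scores[best]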

5.
In this paper, we present the first algorithm for progressive sampling of 3D surfaces with blue noise characteristics that runs entirely on the GPU. The performance of our algorithm is comparable to state-of-the-art GPU Poisson-disk sampling methods, while additionally producing ordered sequences of samples in which every prefix exhibits good blue noise properties. The basic idea is to reduce the 3D sampling domain to a set of 2.5D images which we sample in parallel, utilizing the rasterization hardware of current GPUs. This allows for simple visibility-aware sampling that only captures the surface as seen from outside the sampled object, which is especially useful for point-based level-of-detail rendering methods. However, our method can be easily extended to sample the entire surface without changing the basic algorithm. We provide a statistical analysis of our algorithm, show that it produces good blue noise characteristics for every prefix of the resulting sample sequence, and analyze the performance of our method compared to related state-of-the-art sampling methods.
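For intuition only, the following CPU sketch shows the classic dart-throwing way to obtain an ordered sample sequence whose every prefix satisfies a minimum-distance (blue noise) criterion in the unit square; the paper's GPU rasterization-based algorithm on 2.5D images is not reproduced.

import numpy as np

def progressive_poisson_disk(n_samples, radius, max_tries=10000, seed=0):
    """Dart throwing in the unit square: returns samples in generation
    order, so every prefix keeps the minimum-distance (blue noise) property."""
    rng = np.random.default_rng(seed)
    samples = []
    tries = 0
    while len(samples) < n_samples and tries < max_tries:
        p = rng.random(2)
        tries += 1
        if all(np.linalg.norm(p - q) >= radius for q in samples):
            samples.append(p)
            tries = 0                              # reset the rejection counter
    return np.array(samples)

pts = progressive_poisson_disk(200, radius=0.05)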

6.
We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach to texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphics hardware and provides a handy configuration for capturing a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture. We also show that our approach is resilient to misalignment caused by imperfect estimation of the warping fields and inaccurate camera parameters.

7.
Fused deposition modeling based 3D printing is becoming increasingly popular due to its low cost and simple operation and maintenance. While it produces rugged prints made from a wide range of materials, it suffers from an inherent printing limitation: it cannot produce overhanging surfaces of non-trivial size. This limitation can be handled by constructing temporary support structures; however, this solution involves additional material costs, longer print time, and often a fair amount of labor in removing them. In this paper we present a new method for partitioning general solid objects into a small number of parts that can be printed with no support. The partitioning is computed by applying a sequence of cutting planes that split the object recursively. Unlike existing algorithms, the planes are not chosen at random; rather, they are derived from shape analysis routines that identify and resolve various commonly found geometric configurations. In addition, we guide this search by a revised set of conditions that both ensure the object's printability and realistically model the printing capabilities of the printer at hand. Evaluation of the new method demonstrates its ability to efficiently obtain support-free partitionings, typically containing fewer parts than existing methods that rely on support structures.
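As an illustration of the kind of printability condition such methods rely on, the sketch below measures the total facet area that would require support for a given build direction, using the common maximum-overhang-angle rule; it is a simplified stand-in, not the paper's revised condition set, and the parameter names are illustrative.

import numpy as np

def overhanging_area(face_normals, face_areas, build_dir, max_overhang_deg=45.0):
    """Total area of faces that would need support when printing along build_dir.

    A face overhangs when its outward normal points downward more steeply than
    the printer's maximum overhang angle allows, i.e. the angle between the
    normal and the downward direction is smaller than (90 - max_overhang) degrees.
    """
    d = np.asarray(build_dir, dtype=float)
    d /= np.linalg.norm(d)
    cos_limit = np.cos(np.radians(90.0 - max_overhang_deg))
    downward = face_normals @ (-d)               # > 0 means the face points downward
    needs_support = downward > cos_limit
    return float(face_areas[needs_support].sum())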

8.
9.
We present a novel and light-weight approach to capture and reconstruct structured 3D models of multi-room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside them. Such a 3D layout is directly suitable for use in a number of real-world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition panoramic images into a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room boundary extraction. Moreover, we introduce a plane-sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top-down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi-room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging for existing techniques. The effectiveness and performance of our approach are evaluated on both real-world and synthetic models.

10.
Selecting informative and visually appealing views for 3D indoor scenes is beneficial for the housing, decoration, and entertainment industries. A set of views that exhibits the comfort, aesthetics, and functionality of a particular scene can attract customers and facilitate business transactions. However, selecting views for an indoor scene is challenging because the system has to consider not only the need to reveal as much information as possible, but also object arrangements, occlusions, and characteristics. Since many principles can be utilized to guide view selection, with different principles to follow under different circumstances, we achieve the goal by imitating popular photos from the Internet. Specifically, we select the view that optimizes the contour similarity of corresponding objects to the photo. Because the selected view can be inadequate if the object arrangements in the 3D scene and the photo differ, our system imitates many popular photos and selects a certain number of views. It then clusters the selected views and determines the view/cluster centers by a weighted average to finally exhibit the scene. Experimental results demonstrate that the views selected by our method are visually appealing.
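To make the contour-imitation step concrete, the hedged sketch below scores candidate views by the intersection-over-union between a rendered object silhouette and the object contour mask extracted from a reference photo; render_silhouette is a hypothetical placeholder for a renderer, and the clustering stage is omitted.

import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def best_view(scene, photo_mask, candidate_views, render_silhouette):
    """Pick the candidate camera whose rendered silhouette best matches the
    object contour mask extracted from a reference photo.
    render_silhouette(scene, view) -> boolean mask is a placeholder renderer."""
    scores = [mask_iou(render_silhouette(scene, v), photo_mask) for v in candidate_views]
    i = int(np.argmax(scores))
    return candidate_views[i], scores[i]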

11.
We present a novel method to compute bijective PolyCube-maps with low isometric distortion. Given a surface and its pre-axis-aligned shape that is not an exact PolyCube shape, the algorithm contains two steps: (i) construct a PolyCube shape to approximate the pre-axis-aligned shape; and (ii) generate a bijective, low isometric distortion mapping between the constructed PolyCube shape and the input surface. The PolyCube construction is formulated as a constrained optimization problem, where the objective is the number of corners in the constructed PolyCube, and the constraint is to bound the approximation error between the constructed PolyCube and the input pre-axis-aligned shape while ensuring topological validity. A novel erasing-and-filling solver is proposed to solve this challenging problem. Central to the algorithm for computing bijective PolyCube-maps is a quad mesh optimization process that projects the constructed PolyCube onto the input surface with high-quality quads. We demonstrate the efficacy of our algorithm on a data set containing 300 closed meshes. Compared to state-of-the-art methods, our method achieves higher practical robustness and lower mapping distortion.

12.
While analysing and synthesising 2D distributions of points has been applied both to the generation of textures with discrete elements and to populating virtual worlds with 3D objects, the results are often inaccurate since the spatial extent of objects cannot be expressed. We introduce three improvements enabling the synthesis of more general distributions of elements. First, we extend continuous pair correlation function (PCF) algorithms to multi-class distributions using a dependency graph, thereby capturing interrelationships between distinct categories of objects. Second, we introduce a new normalised metric for disks, which makes the method applicable to both point and possibly overlapping disk distributions. The metric is specifically designed to distinguish perceptually salient features, such as disjoint, tangent, overlapping, or nested disks. Finally, we pay particular attention to the convergence of the mean PCF as well as the validity of individual PCFs, by taking into consideration the variance of the input. Our results demonstrate that this framework can capture and reproduce real-life distributions of elements representing a variety of complex semi-structured patterns, from the interaction between trees and the understorey in a forest to droplets of water. More generally, it applies to any category of 2D object whose shape is better represented by bounding circles than by points.
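For reference, a discretized isotropic pair correlation function of a 2D point set can be estimated as in the sketch below (unit-area domain, no edge correction); the paper's normalised disk metric and multi-class dependency graph are not reproduced.

import numpy as np

def pair_correlation(points, r_max, n_bins=64, domain_area=1.0):
    """Isotropic PCF estimate for 2D points; g(r) ~ 1 for a Poisson process."""
    pts = np.asarray(points)
    n = len(pts)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    dists = dists[np.triu_indices(n, k=1)]            # unordered pairs
    hist, edges = np.histogram(dists, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_area = 2.0 * np.pi * r * np.diff(edges)      # annulus areas per bin
    density = n / domain_area
    expected = 0.5 * n * density * shell_area          # pairs expected for Poisson
    return r, hist / np.maximum(expected, 1e-12)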

13.
Dissection puzzles require assembling a common set of pieces into multiple distinct forms. Existing works focus on creating 2D dissection puzzles that form primitive or naturalistic shapes. Unlike 2D dissection puzzles, which can be supported on a tabletop surface, 3D dissection puzzles should preferably be steady by themselves in each assembly form. In this work, we aim at computationally designing steady 3D dissection puzzles. We address this challenging problem with three key contributions. First, we take two voxelized shapes as inputs and dissect them into a common set of puzzle pieces, during which we allow slightly modifying the input shapes, preferably in their internal volume, to preserve the external appearance. Second, we formulate a formal model of generalized interlocking for connecting pieces into a steady assembly using both their geometric arrangements and friction. Third, we modify the geometry of each dissected puzzle piece based on the formal model such that each assembly form is steady accordingly. We demonstrate the effectiveness of our approach on a wide variety of shapes, compare it with the state of the art on 2D and 3D examples, and fabricate some of our designed puzzles to validate their steadiness.

14.
In this paper, we present a practically robust method for computing foldover-free volumetric mappings with hard linear constraints. Central to this approach is a projection algorithm that monotonically and efficiently decreases the distance from the mapping to the bounded conformal distortion mapping space. After projection, the conformal distortion of the updated mapping tends to be below the given bound, thereby significantly reducing foldovers. Since it is non-trivial to define an optimal bound, we introduce a practical conformal distortion bound generation scheme to facilitate subsequent projections. By iteratively generating conformal distortion bounds and trying to project mappings into bounded conformal distortion spaces monotonically, our algorithm achieves high-quality foldover-free volumetric mappings with strong practical robustness and high efficiency. Compared with existing methods, our method computes mesh-based and meshless volumetric mappings with no prescribed conformal distortion bounds. We demonstrate the efficacy and efficiency of our method through a variety of geometric processing tasks.
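As a simplified illustration of bounded conformal distortion, the sketch below measures the quasi-conformal distortion of a per-element Jacobian from its singular values and clips the singular values so that their ratio respects a bound K. This element-wise clamp is only a didactic stand-in, not the paper's global projection algorithm.

import numpy as np

def conformal_distortion(J):
    """Quasi-conformal distortion sigma_max / sigma_min of a 3x3 Jacobian."""
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / max(s[-1], 1e-12)

def clamp_to_bound(J, K):
    """Pull a single Jacobian toward the set {sigma_max / sigma_min <= K}
    by clipping its singular values around their geometric mean."""
    U, s, Vt = np.linalg.svd(J)
    if s[0] / max(s[-1], 1e-12) <= K:
        return J
    g = np.sqrt(s[0] * s[-1])                      # geometric mean of the extremes
    s = np.clip(s, g / np.sqrt(K), g * np.sqrt(K)) # ratio of clipped values is <= K
    return U @ np.diag(s) @ Vt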

15.
We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited to non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model materials exhibiting a regular or stationary texture-like appearance by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible “rapid-appearance-modeling”.

16.
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.
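To give a flavor of the LDR-to-HDR step, the sketch below performs a naive inverse-tone-mapping expansion (linearize with an inverse gamma, then rescale toward the peak radiance of an HDR exemplar). The paper's rendering-space expansion and Poisson propagation are not reproduced, and the gamma and scaling choices are assumptions.

import numpy as np

def expand_ldr(colors_ldr, hdr_exemplar_max, gamma=2.2):
    """Naive LDR -> HDR expansion for per-point colors in [0, 1]:
    undo the display gamma, then rescale so that the brightest values
    reach the peak radiance observed in an HDR exemplar of the same scene."""
    linear = np.clip(colors_ldr, 0.0, 1.0) ** gamma    # linearize
    peak_linear = linear.max()
    scale = hdr_exemplar_max / max(peak_linear, 1e-6)
    return linear * scale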

17.
In this paper, we present a new computational method for automatically tracing high-resolution photographs to create expressive line drawings. We define expressive lines as those that convey important edges, shape contours, and large-scale texture lines that are necessary to accurately depict the overall structure of objects (similar to those found in technical drawings) while still being sparse and artistically pleasing. Given a photograph, our algorithm extracts expressive edges and creates a clean line drawing using a convolutional neural network (CNN). We employ an end-to-end trainable fully convolutional CNN to learn the model in a data-driven manner. The model consists of two networks that cope with two sub-tasks: extracting coarse lines and refining them to be cleaner and more expressive. To build a model that is optimal for each domain, we construct two new datasets for face/body and manga background. The experimental results qualitatively and quantitatively demonstrate the effectiveness of our model. We further illustrate two practical applications.
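A minimal two-stage fully convolutional architecture in this spirit (coarse extraction followed by refinement) might look like the PyTorch sketch below; the layer counts and channel sizes are toy values and do not reflect the authors' model.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TwoStageLineNet(nn.Module):
    """Coarse line extraction followed by a refinement stage (toy sizes)."""
    def __init__(self):
        super().__init__()
        self.coarse = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 1, kernel_size=1))
        # the refinement stage sees the photo plus the coarse line map
        self.refine = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 1, kernel_size=1))

    def forward(self, photo):
        coarse = torch.sigmoid(self.coarse(photo))
        refined = torch.sigmoid(self.refine(torch.cat([photo, coarse], dim=1)))
        return coarse, refined

net = TwoStageLineNet()
lines_coarse, lines_clean = net(torch.rand(1, 3, 256, 256))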

18.
In many scientific disciplines, the motion of finite-sized objects in fluid flows plays an important role, such as in brownout engineering, sediment transport, oceanology or meteorology. These finite-sized objects are called inertial particles and, in contrast to traditional tracer particles, their motion depends on their current position, their own particle velocity, the time, and their size. Thus, the visualization of their motion becomes a high-dimensional problem that entails computational and perceptual challenges. So far, no visualization has explored and visualized the particle trajectories under variation of all seeding parameters. In this paper, we propose three coordinated views that visualize the different aspects of the high-dimensional space in which the particles live. We visualize the evolution of particles over time, showing that particles travel different distances in the same time, depending on their size. The second view provides a clear illustration of the trajectories of different particle sizes and allows the user to easily identify differences due to particle size. Finally, we embed the trajectories in the space-velocity domain and visualize their distance to an attracting manifold using ribbons. In all views, we support interactive linking and brushing, and provide abstraction through density volumes that are shown by direct volume rendering and isosurface slabs. Using our method, users gain deeper insights into the dynamics of inertial particles in 2D fluids, including size-dependent separation, preferential clustering, and attraction. We demonstrate the effectiveness of our method in multiple steady and unsteady 2D flows.
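For intuition, a common simplified inertial-particle model keeps only Stokes drag, dv/dt = (u(x, t) - v)/tau, with the response time tau growing with particle size; the sketch below integrates it with forward Euler to show how trajectories depend on size. This is background material, not the visualization method itself.

import numpy as np

def trace_inertial_particle(u, x0, v0, tau, t0, t1, dt):
    """Forward-Euler trace of an inertial particle under Stokes drag:
        dx/dt = v,   dv/dt = (u(x, t) - v) / tau
    u   : callable(x, t) -> 2D flow velocity
    tau : particle response time (grows with particle size)
    """
    x, v, t = np.array(x0, float), np.array(v0, float), t0
    path = [x.copy()]
    while t < t1:
        v = v + dt * (u(x, t) - v) / tau
        x = x + dt * v
        t += dt
        path.append(x.copy())
    return np.array(path)

# example: a steady swirl; a larger tau makes the trajectory lag the flow more
swirl = lambda x, t: np.array([-x[1], x[0]])
small = trace_inertial_particle(swirl, [1.0, 0.0], [0.0, 0.0], tau=0.05, t0=0, t1=5, dt=0.01)
large = trace_inertial_particle(swirl, [1.0, 0.0], [0.0, 0.0], tau=0.50, t0=0, t1=5, dt=0.01)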

19.
As the number of models for 3D indoor scenes increases rapidly, methods for generating lighting layouts have also become increasingly important. This paper presents a novel method that creates optimal placements and intensities for a set of lights in indoor scenes. Our method is characterized by designing the objective functions for the optimization based on the lighting guidelines used in the interior design field. Specifically, to apply major elements of the lighting guidelines, we identify three criteria, namely structure, function, and aesthetics, that are suitable for virtual spaces and quantify them through a set of objective terms: pairwise relation, hierarchy, circulation, illuminance, and collision. Given an indoor scene with properly arranged furniture as input, our method combines procedural and optimization-based approaches to generate lighting layouts appropriate to the geometric and functional characteristics of the input scene. The effectiveness of our method is demonstrated with an ablation study of the cost terms for the optimization and a user study for perceptual evaluation.
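To sketch how such objective terms might be combined, the hedged example below forms a weighted sum of user-supplied cost callables and minimizes it by accepting improving random perturbations; the paper's actual term definitions and optimizer are not reproduced, and all names are placeholders.

import numpy as np

def total_cost(layout, terms, weights):
    """Weighted sum of lighting-layout cost terms.
    terms: dict name -> callable(layout) -> float (e.g. 'illuminance', 'collision')."""
    return sum(weights[name] * fn(layout) for name, fn in terms.items())

def random_search(init_layout, terms, weights, perturb, n_iters=2000, seed=0):
    """Minimize the weighted cost by keeping random perturbations that improve it."""
    rng = np.random.default_rng(seed)
    best, best_cost = init_layout, total_cost(init_layout, terms, weights)
    for _ in range(n_iters):
        candidate = perturb(best, rng)             # perturb is a user-supplied callable
        c = total_cost(candidate, terms, weights)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost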

20.
We present an approach to fabricate shell objects with thickness parameters that are computed to maintain user-specified structural stability. Given a boundary surface and user-specified external forces, we optimize the thickness parameters used to extrude the surface according to stress constraints. Our approach mainly consists of two technical components: First, we develop a patch-based shell simulation technique to efficiently support the static simulation of extruded shell objects using finite element methods. Second, we analytically compute the derivative of stress required in the sensitivity analysis to turn the optimization into a sequential linear programming problem. Experimental results demonstrate that our approach can optimize the thickness parameters for arbitrary surfaces in a few minutes and accurately predict the physical properties, such as the deformation and stress, of the fabricated object.
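A generic sequential linear programming loop of the kind referred to above can be sketched as follows, with placeholder callables for the volume gradient and the stress sensitivities; the trust-region step size and bounds handling are assumptions, not the authors' formulation.

import numpy as np
from scipy.optimize import linprog

def slp_thickness(t0, volume_grad, stress, stress_grad, stress_limit,
                  t_bounds, step=0.05, n_iters=20):
    """Sequential linear programming for shell thickness parameters t.

    Minimizes a linearized material-volume objective subject to linearized
    stress constraints stress(t) + J dt <= stress_limit, within a box of size 'step'.
    volume_grad, stress, stress_grad are user-supplied callables (placeholders).
    """
    t = np.asarray(t0, float)
    for _ in range(n_iters):
        c = volume_grad(t)                       # gradient of the volume objective
        s = stress(t)                            # (m,) current per-element stress
        J = stress_grad(t)                       # (m, n) sensitivities d stress / d t
        b_ub = stress_limit - s                  # J dt <= limit - stress(t)
        bounds = [(max(t_bounds[0] - ti, -step), min(t_bounds[1] - ti, step)) for ti in t]
        res = linprog(c, A_ub=J, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:
            break
        t = t + res.x                            # apply the LP step
    return t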
