Similar literature (20 results)
1.
This paper presents the first practical method for "origamizing", or obtaining the folding pattern that folds a single sheet of material into a given polyhedral surface without any cuts. The basic idea is to tuck-fold a planar sheet of paper to form a three-dimensional shape. The main contribution is to solve the inverse problem: the input is an arbitrary polyhedral surface and the output is the folding pattern. Our approach is to convert this problem into a problem of laying out the polygons of the surface on a planar sheet by introducing the concept of tucking molecules. We investigate the equality and inequality conditions required for constructing a valid crease pattern. We propose an algorithm based on two-step mapping and edge splitting to solve these conditions. The two-step mapping precalculates linear equalities and separates them from the other conditions, which allows interactive manipulation of the crease pattern in the system implementation. We present the first system for designing three-dimensional origami, enabling a user to interactively design complex spatial origami models that have not been realizable thus far.

2.
Soft Folding     
We introduce soft folding, a new interactive method for designing and exploring thin-plate forms. A user specifies sharp and soft folds as two-dimensional (2D) curves on a flat sheet, along with the fold magnitude and sharpness of each. Then, based on the soft folds, the system computes the three-dimensional (3D) folded shape. Internally, the system first computes a fold field, which defines local folding operations on a flat sheet. A fold field is a generalization of the discrete fold graph in origami, replacing a graph of sharp folds with a continuous field of soft folds. Next, local patches are folded independently according to the fold field. Finally, a globally folded 3D shape is obtained by assembling the locally folded patches. This algorithm computes an approximation of 3D developable surfaces with user-defined soft folds at interactive speed. The user can later apply nonlinear physical simulation to generate more realistic results. Experimental results demonstrate that soft folding is effective for producing complex folded shapes with controllable sharpness.
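A minimal sketch of the fold-field idea described above, under simplifying assumptions: the sheet is sampled on a grid, each user curve contributes a fold angle that falls off with distance from the curve, and the falloff width is controlled by the curve's sharpness. The Gaussian falloff, the vertex-distance approximation, and all names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fold_field(grid_xy, curves, magnitudes, sharpnesses):
    """Accumulate a per-point fold angle over a flat sheet.

    grid_xy     : (N, 2) sample points on the flat sheet
    curves      : list of (M_i, 2) polylines drawn by the user
    magnitudes  : peak fold angle (radians) of each curve
    sharpnesses : larger value -> narrower, sharper crease
    """
    angles = np.zeros(len(grid_xy))
    for pts, mag, sharp in zip(curves, magnitudes, sharpnesses):
        # distance from every grid point to the polyline (vertex distance is a
        # crude stand-in for true point-to-segment distance)
        d = np.min(np.linalg.norm(grid_xy[:, None, :] - pts[None, :, :], axis=2), axis=1)
        # soft fold: Gaussian falloff; a sharp fold is the narrow-width limit
        angles += mag * np.exp(-(sharp * d) ** 2)
    return angles

# toy usage: one soft vertical crease across a 10x10 sheet
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
crease = np.array([[0.5, 0.0], [0.5, 1.0]])
field = fold_field(grid, [crease], [np.pi / 3], [4.0])
print(field.reshape(10, 10).round(2))
```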

3.
Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than those produced by methods without learned stiffness.

4.
An Example-based Procedural System for Element Arrangement
We present a method for synthesizing two-dimensional (2D) element arrangements from an example. The main idea is to combine texture synthesis techniques based on local neighborhood comparison with procedural modeling systems based on local growth. Given a user-specified reference pattern, our system analyzes the neighborhood information of each element by constructing connectivity. Our synthesis process starts with a single seed and progressively places elements one by one, each time searching for the reference element whose local features are most similar to the target location in the synthesized pattern. To support creative design activities, we introduce three types of interaction for controlling global features of the resulting pattern, namely a spray tool, a flow field tool, and a boundary tool. We also introduce a global optimization process that helps to avoid local error concentrations. We illustrate the feasibility of our method by creating several types of 2D patterns.
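A much-simplified sketch of the greedy growth loop, assuming elements are bare 2D points and the neighborhood descriptor is just the sorted distances to the k nearest already-placed elements. The paper's descriptor, connectivity construction, interaction tools, and global optimization are all richer; every name below is an illustrative assumption.

```python
import numpy as np

def neighborhood(points, center, k=4):
    """Descriptor: sorted distances from `center` to its k nearest points."""
    d = np.sort(np.linalg.norm(points - center, axis=1))
    d = d[d > 1e-9][:k]                       # drop the point itself
    return np.pad(d, (0, max(0, k - len(d))))

def synthesize(reference, n_out, k=4, seed=0):
    """Greedy growth: repeatedly pick a placed element, find the reference
    element whose neighborhood descriptor best matches it, and copy that
    element's offset to its nearest reference neighbour."""
    rng = np.random.default_rng(seed)
    out = [reference[0].copy()]                # seed with one reference element
    for _ in range(n_out - 1):
        base = out[rng.integers(len(out))]
        placed = np.asarray(out)
        costs = [np.linalg.norm(neighborhood(placed, base, k)
                                - neighborhood(reference, r, k)) for r in reference]
        r = reference[int(np.argmin(costs))]
        others = reference[np.any(reference != r, axis=1)]
        nn = others[np.argmin(np.linalg.norm(others - r, axis=1))]
        out.append(base + (nn - r))            # copy the matched local offset
    return np.asarray(out)

ref = np.random.default_rng(1).random((30, 2))   # toy reference pattern
print(synthesize(ref, 12).round(2))
```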

5.
Modeling 3D objects is difficult, especially for users who lack knowledge of 3D geometry or even of 2D sketching. In this paper, we present a novel sketch-based modeling system that allows novice users to create custom 3D models by assembling parts from a database of pre-segmented 3D models. Unlike previous systems, our system dynamically provides visualized, meaningful shadow guidance beneath the user's strokes to help the user convey a design concept easily and quickly. The system interprets the user's strokes as similarity queries into the database, generating a shadow image that guides further drawing while simultaneously returning candidate 3D parts for modeling. Moreover, our system preserves high-level structure in the generated models based on prior knowledge pre-analyzed from the database, and allows the user to create custom parts with geometric variations. We demonstrate the applicability and effectiveness of our modeling system with human subjects and present various models designed using our system.
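A rough sketch of the "strokes as similarity queries" step, assuming both the strokes and the database parts are rasterized to small binary images and the shadow is a weighted blend of the top matches. The descriptor (normalized raster overlap), image sizes, and function names are assumptions, not the system's actual retrieval pipeline.

```python
import numpy as np

def query_shadow(stroke_img, part_imgs, top_k=5):
    """Interpret a stroke raster as a similarity query.

    stroke_img : (H, W) binary raster of the user's strokes so far
    part_imgs  : (N, H, W) binary rasters of database parts in canonical views
    Returns the blended "shadow" image of the top_k matches and their indices.
    """
    s = stroke_img.ravel().astype(float)
    p = part_imgs.reshape(len(part_imgs), -1).astype(float)
    # similarity = normalized overlap between the stroke and each part raster
    sim = (p @ s) / (np.linalg.norm(p, axis=1) * np.linalg.norm(s) + 1e-9)
    idx = np.argsort(-sim)[:top_k]
    w = sim[idx] / (sim[idx].sum() + 1e-9)
    shadow = np.tensordot(w, part_imgs[idx].astype(float), axes=1)
    return shadow, idx

# toy usage with random rasters: a partial drawing of part 3 should rank part 3 high
rng = np.random.default_rng(0)
parts = (rng.random((20, 32, 32)) > 0.7).astype(np.uint8)
stroke = (parts[3] * (rng.random((32, 32)) > 0.3)).astype(np.uint8)
shadow, idx = query_shadow(stroke, parts)
print("closest parts:", idx)
```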

6.
We present the design of an interactive image-based modeling tool that enables a user to quickly generate detailed, textured 3D models from a set of calibrated input images. Our main contribution is an intuitive user interface that is based entirely on simple 2D painting operations and requires neither technical expertise from the user nor difficult pre-processing of the input images. One central component of our tool is a GPU-based multi-view stereo reconstruction scheme, implemented as an incremental algorithm that runs in the background during user interaction, so the user does not notice any significant response delay.
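A small sketch of the architectural point only: heavy reconstruction work is consumed from a queue by a background worker so the painting interface never blocks. The actual GPU stereo update is replaced here by a placeholder sleep; all names are illustrative assumptions.

```python
import threading, queue, time

def background_reconstructor(job_q, result_holder):
    """Consume painting events and incrementally refine a model in the
    background so the UI thread never waits on reconstruction."""
    while True:
        stroke = job_q.get()
        if stroke is None:                 # sentinel: shut down
            break
        time.sleep(0.05)                   # placeholder for the incremental stereo update
        result_holder["model_version"] = result_holder.get("model_version", 0) + 1

jobs, model = queue.Queue(), {}
worker = threading.Thread(target=background_reconstructor, args=(jobs, model), daemon=True)
worker.start()

for i in range(3):                         # "UI thread": painting stays responsive
    jobs.put({"stroke_id": i})
jobs.put(None)
worker.join()
print("model refined", model["model_version"], "times")
```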

7.
We propose an image editing system for repositioning objects in a single image based on the perspective of the scene. In our system, an input image is transformed into a layer structure composed of object layers and a background layer, and the scene depth is then computed from the ground region, which the user specifies with a simple boundary line. The object size and order of overlapping are determined automatically during repositioning based on the scene depth. In addition, our system enables the user to move shadows along with objects naturally by extracting the shadow mattes from only a few user-specified scribbles. Finally, we demonstrate the versatility of our system through applications to depth-of-field effects, fog synthesis, and 3D walkthrough in an image.
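A minimal sketch of the perspective rule such a system relies on, under strong assumptions: a pinhole camera looking parallel to a flat ground plane, so the depth of a ground-contact point follows from its image row relative to the horizon, and the object's image size scales inversely with depth. The camera height, focal length, and function names are hypothetical, not the paper's calibration procedure.

```python
def depth_on_ground(y_img, horizon_y, cam_height, focal):
    """Depth of a ground point from its image row, for a camera looking
    parallel to a flat ground plane (simple pinhole model)."""
    dy = y_img - horizon_y              # pixels below the horizon line
    assert dy > 0, "point must lie below the horizon"
    return focal * cam_height / dy

def reposition_scale(y_old, y_new, horizon_y, cam_height=1.6, focal=800.0):
    """Relative scale to apply to an object's layer when its ground-contact
    point moves from image row y_old to y_new: image size ~ 1 / depth."""
    z_old = depth_on_ground(y_old, horizon_y, cam_height, focal)
    z_new = depth_on_ground(y_new, horizon_y, cam_height, focal)
    return z_old / z_new

# moving an object's footprint closer to the horizon pushes it deeper,
# so its layer should shrink:
print(round(reposition_scale(y_old=600, y_new=450, horizon_y=300), 3))  # 0.5
```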

8.
This paper presents a method that can convert a given 3D mesh into a flat-foldable model consisting of rigid panels. Previous work proposed a method to assist the manual design of a single component of such a flat-foldable model, consisting of vertically connected side panels as well as horizontal top and bottom panels. Our method semi-automatically generates a more complicated model that approximates the input mesh with multiple convex components. The user specifies the folding direction of each convex component and the fidelity of the shape approximation. Given these inputs, our method optimizes the shapes and positions of the panels of each convex component so that the whole model is flat-foldable. The user can check a folding animation of the output model. We demonstrate the effectiveness of our method by fabricating physical paper prototypes of flat-foldable models.

9.
This paper presents a novel algorithm that enables the semi-automatic reconstruction of human-made structures (e.g., buildings) into piecewise planar 3D models from a single image. This allows the models to be readily used in virtual or augmented reality visual simulations or for data acquisition in 3D geographic information systems. Contrary to traditional labor-intensive but accurate single view reconstruction (SVR) solutions based purely on geometric constraints, and contrary to recent fully automatic albeit low-accuracy SVR algorithms based on statistical inference, the presented method achieves a compromise between speed and accuracy, leading to less user input and acceptable visual effects. The user input required in the presented approach is primarily a line drawing that represents an outline of the building to be reconstructed. Using this input, the developed method takes advantage of a newly proposed vanishing point (VP) detection algorithm that can simultaneously estimate multiple VPs in an image. With those VPs, the normal direction of planes—which are projected onto the image plane as polygons in the line drawing—can be automatically calculated. Following this step, a linear system similar to the traditional SVR solutions can be used to achieve 3D reconstruction. Experiments that demonstrate the efficacy and visual outcome of the developed method are also described, highlighting the method’s potential use for rapid geometric model building of surrounding structures in visual simulation of engineering processes.
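A short sketch of the step the abstract mentions, computing a plane's normal from vanishing points: a vanishing point v is the image of a 3D direction d with d proportional to K⁻¹[v, 1]ᵀ, so the normal of a plane containing two such directions is their cross product. The intrinsic matrix K and the example coordinates below are assumed values for illustration.

```python
import numpy as np

def plane_normal_from_vps(vp1, vp2, K):
    """Normal of a world plane from the vanishing points (pixel coordinates)
    of two independent directions lying in that plane."""
    Kinv = np.linalg.inv(K)
    d1 = Kinv @ np.array([vp1[0], vp1[1], 1.0])   # back-projected direction 1
    d2 = Kinv @ np.array([vp2[0], vp2[1], 1.0])   # back-projected direction 2
    n = np.cross(d1, d2)
    return n / np.linalg.norm(n)

# assumed intrinsics: focal length 1000 px, principal point at (640, 360)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
# two vanishing points of edge directions on one building facade (made up)
print(plane_normal_from_vps((1500.0, 350.0), (-300.0, 380.0), K))
```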

10.
A recent trend in interactive modeling of 3D shapes from a single image is designing minimal interfaces, and accompanying algorithms, for modeling a specific class of objects. Expanding upon the range of shapes that existing minimal interfaces can model, we present an interactive image-guided tool for modeling shapes made up of extruded parts. An extruded part is represented by extruding a closed planar curve, called base, in the direction orthogonal to the base. To model each extruded part, the user only needs to sketch the projected base shape in the image. The main technical contribution is a novel optimization-based approach for recovering the 3D normal of the base of an extruded object by exploring both geometric regularity of the sketched curve and image contents. We developed a convenient interface for modeling multi-part shapes and a method for optimizing the relative placement of the parts. Our tool is validated using synthetic data and tested on real-world images.

11.
12.
Objects with various types of mechanical joints are among the most commonly built objects. Joints implement a vocabulary of simple constrained motions (kinematic pairs) that can be used to build more complex behaviors. Defining physically correct joint geometry is crucial both for the realistic appearance of models during motion, as these are typically the only parts of the geometry that stay in contact, and for fabrication. Direct design of joint geometry often requires more effort than the design of the rest of the object geometry, as it requires designing components that stay in precise contact, are aligned with other parts, and allow the desired range of motion. We present an interactive system for creating physically realizable joints with user-controlled appearance. Our system minimizes or, in most cases, completely eliminates the need for the user to manipulate low-level joint geometry. This is achieved by automatically inferring a small number of plausible combinations of joint dimensions, placement, and orientation from the part geometry, with the user making the final high-level selection based on object semantics. Through user studies, we demonstrate that functional results with a satisfying appearance can be obtained quickly by users with minimal modeling experience, offering a significant improvement in the time required for joint construction compared to standard modeling approaches.

13.
Noisy volumetric details such as clouds, ground, plaster, bark, and roughcast are frequently encountered in nature and make an important contribution to the realism of outdoor scenes. We introduce a new interactive approach that eases the creation of procedural representations of "stochastic" volumetric details from a single example photograph. Instead of attempting to reconstruct an accurate geometric representation from the photograph, we use a stochastic multi-scale approach that fits the parameters of a multi-layered noise-based 3D deformation model, using a multi-resolution filter bank error metric. Once computed, visually similar details can be applied to arbitrary objects with a high degree of visual realism, since lighting and parallax effects are naturally taken into account. Our approach is inspired by image-based techniques. In practice, the user supplies a photograph of an object covered by noisy details, provides a corresponding coarse approximation of the shape of this object as well as an estimated lighting condition (generally a light source direction). Our system then determines the corresponding noise-based representation as well as diffuse, ambient, specular and semi-transparency reflectance parameters. The resulting details are fully procedural and, as such, have the advantage of extreme compactness, while they can be infinitely extended without repetition in order to cover huge surfaces.
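A toy sketch of a multi-resolution error metric of the kind the fitting loop would minimize: both the example photograph and a rendering of the candidate noise parameters are pushed through a small Gaussian pyramid and compared via per-level band statistics. The pyramid construction, the statistics (mean and standard deviation), and all names are assumptions; the paper's filter bank and fitting procedure are more elaborate.

```python
import numpy as np

def gaussian_pyramid(img, levels=4):
    """Very small pyramid: crude 5-tap cross blur (with wrap-around via roll),
    then subsample by 2 at every level."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        blurred = (a
                   + np.roll(a, 1, 0) + np.roll(a, -1, 0)
                   + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 5.0
        pyr.append(blurred[::2, ::2])
    return pyr

def filterbank_error(photo, rendering, levels=4):
    """Sum of per-level differences of simple band statistics; used as the
    objective when fitting the noise-model parameters."""
    err = 0.0
    for a, b in zip(gaussian_pyramid(photo, levels), gaussian_pyramid(rendering, levels)):
        err += abs(a.mean() - b.mean()) + abs(a.std() - b.std())
    return err

rng = np.random.default_rng(0)
photo = rng.random((64, 64))
candidate = rng.random((64, 64)) * 0.8           # a poorly scaled candidate rendering
print(round(filterbank_error(photo, candidate), 4))
```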

14.
Distance learning is expanding rapidly, fueled by novel technologies for sharing recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The research goal was to compare learning in two virtual learning scenarios: an identical scenario presented on 2D displays and in 3DS. Participants watched a 2D or 3DS video of an instructor demonstrating a box origami paper-folding task. We compared participants' folding test scores and self-assessment questionnaires for the teaching scenarios and calculated their cognitive load index (CLI) based on electroencephalogram measurements taken during the observation periods. Results showed a highly significant difference in participants' folding test scores, CLI, and self-assessment questionnaire results between the 2D and 3DS sessions. Our findings indicate that employing stereoscopic 3D technology instead of 2D displays in the design of emerging virtual and augmented reality applications for distance learning has advantages.

15.
We present a method for analytically calculating an anti-aliased rasterization of arbitrary polygons or fonts bounded by Bézier curves in 2D, as well as oriented triangle meshes in 3D. Our algorithm rasterizes multiple resolutions simultaneously using a hierarchical wavelet representation and is robust to degenerate inputs. We show that using the simplest wavelet, the Haar basis, is equivalent to applying a box filter to the rasterized image. Because we evaluate wavelet coefficients through line integrals in 2D, we are able to derive analytic solutions for polygons that have Bézier curve boundaries of any order, and we provide solutions for quadratic and cubic curves. In 3D, we compute the wavelet coefficients through analytic surface integrals over triangle meshes and show how to do so in a computationally efficient manner.
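A small sketch of the "Haar basis equals box filter" observation: the coarsest Haar coefficient of a pixel is the fraction of that pixel covered by the polygon, which can be computed exactly by clipping the polygon to the pixel and taking the area of the result. The paper evaluates such coefficients with line integrals and extends them to Bézier boundaries and 3D; the clipping route below is a simpler stand-in, and all names are assumptions.

```python
import numpy as np

def clip_poly(poly, axis, value, keep_less):
    """Sutherland-Hodgman clip of a polygon against one axis-aligned line."""
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        ina = (a[axis] <= value) if keep_less else (a[axis] >= value)
        inb = (b[axis] <= value) if keep_less else (b[axis] >= value)
        if ina:
            out.append(a)
        if ina != inb:                          # edge crosses the clip line
            t = (value - a[axis]) / (b[axis] - a[axis])
            out.append(a + t * (b - a))
    return np.array(out) if out else np.zeros((0, 2))

def coverage(poly, x0, y0):
    """Area fraction of the unit pixel [x0,x0+1]x[y0,y0+1] covered by `poly`
    (the box-filtered, i.e. coarsest-Haar, value of the rasterized polygon)."""
    p = poly
    for axis, value, keep_less in [(0, x0, False), (0, x0 + 1, True),
                                   (1, y0, False), (1, y0 + 1, True)]:
        if len(p) == 0:
            return 0.0
        p = clip_poly(p, axis, value, keep_less)
    if len(p) < 3:
        return 0.0
    x, y = p[:, 0], p[:, 1]
    return abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))) / 2.0  # shoelace

tri = np.array([[0.2, 0.2], [3.5, 0.4], [1.0, 2.8]])
img = np.array([[coverage(tri, x, y) for x in range(4)] for y in range(3)])
print(img.round(2))   # anti-aliased 4x3 raster of the triangle
```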

16.
Recent advances in modeling tools enable non-expert users to synthesize novel shapes by assembling parts extracted from model databases. A major challenge for these tools is to provide users with relevant parts, which is especially difficult for large repositories with significant geometric variations. In this paper we analyze unorganized collections of 3D models to facilitate explorative shape synthesis by providing high-level feedback on possible synthesizable shapes. By jointly analyzing arrangements and shapes of parts across models, we hierarchically embed the models into low-dimensional spaces. The user can then use the parameterization to explore the existing models by clicking in different areas or by selecting groups to zoom in on specific shape clusters. More importantly, any point in the embedded space can be lifted to an arrangement of parts to provide an abstracted view of possible shape variations. The abstraction can further be realized by appropriately deforming parts from neighboring models to produce synthesized geometry. Our experiments show that users can rapidly generate plausible and diverse shapes using our system, which also performs favorably with respect to previous modeling tools.
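A simplified sketch of the embed-then-lift idea, assuming each model is already summarized by a fixed-length part descriptor: PCA stands in for the paper's joint, hierarchical embedding, and a clicked point is lifted back to an abstracted descriptor by blending its nearest embedded models. All names and the descriptor format are assumptions.

```python
import numpy as np

def embed(descriptors, dim=2):
    """PCA embedding of per-model descriptors (one row per model)."""
    mean = descriptors.mean(axis=0)
    X = descriptors - mean
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:dim]
    return X @ basis.T, basis, mean

def lift(point2d, embedded, descriptors, k=3):
    """Lift a clicked point in the embedded space to an abstracted descriptor
    by distance-weighted blending of its k nearest models."""
    d = np.linalg.norm(embedded - point2d, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    w /= w.sum()
    return w @ descriptors[idx]

rng = np.random.default_rng(0)
descs = rng.random((50, 12))            # e.g. part sizes / positions per model
emb, basis, mean = embed(descs)
print(lift(np.array([0.1, -0.2]), emb, descs).round(2))
```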

17.
This paper introduces a design and fabrication pipeline for creating floating forms. Our method optimizes for buoyant equilibrium and stability of complex 3D shapes, applying a voxel-carving technique to control the mass distribution. The resulting objects achieve a desired floating pose defined by a user-specified waterline height and orientation. In order to enlarge the feasible design space, we explore novel ways to load the interior of a design using prefabricated components and casting techniques. 3D printing is employed for high-precision fabrication. For larger scale designs we introduce a method for stacking lasercut planar pieces to create 3D objects in a quick and economic manner. We demonstrate fabricated designs of complex shape in a variety of floating poses.
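A minimal sketch of the physics such an optimization targets, evaluated on a boolean voxel grid: for a chosen waterline, Archimedes' principle fixes the displaced water mass, and a stable pose additionally wants the center of mass below and vertically aligned with the center of buoyancy. The voxel-carving search itself is omitted; densities, voxel size, and names are assumptions.

```python
import numpy as np

def float_state(solid, waterline_k, voxel=0.01, rho_material=1.2, rho_water=1.0):
    """Evaluate a floating pose on a boolean voxel grid `solid[z, y, x]`
    (z up, cubic cells of edge `voxel` metres, waterline at z-index `waterline_k`).

    Returns (net buoyancy, centre of mass, centre of buoyancy), where net
    buoyancy is displaced water mass minus object mass (multiply by g for force);
    equilibrium wants it near zero, with the two centres vertically aligned.
    """
    zz, yy, xx = np.nonzero(solid)
    centers = (np.stack([xx, yy, zz], axis=1) + 0.5) * voxel
    mass = rho_material * solid.sum() * voxel ** 3
    com = centers.mean(axis=0)

    submerged = zz < waterline_k                      # cells below the waterline
    displaced = rho_water * submerged.sum() * voxel ** 3
    cob = centers[submerged].mean(axis=0) if submerged.any() else com

    return displaced - mass, com, cob

# hollowed cube: carving interior voxels lowers the mass so the shape can float
cube = np.ones((10, 10, 10), dtype=bool)
cube[2:8, 2:8, 2:8] = False
net, com, cob = float_state(cube, waterline_k=6)
print(round(net, 6), com.round(3), cob.round(3))
```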

18.
Video remains the method of choice for capturing temporal events. However, without access to the underlying 3D scene models, it remains difficult to make object-level edits in a single video or across multiple videos. While it may be possible to explicitly reconstruct the 3D geometry to facilitate these edits, such a workflow is cumbersome, expensive, and tedious. In this work, we present a much simpler workflow for creating plausible edits and mixes of raw video footage using only sparse structure points (SSP) recovered directly from the raw sequences. First, we use user scribbles to structure the point representations obtained by running structure-from-motion on the input videos. The resulting structure points, even when noisy and sparse, are then used to enable various video edits in 3D, including view perturbation, keyframe animation, and object duplication and transfer across videos. Specifically, we describe how to synthesize object images from new views using a novel image-based rendering technique with the SSPs as a proxy for the missing 3D scene information. We propose a structure-preserving image warping on multiple input frames adaptively selected from the object video, followed by a spatio-temporally coherent image stitching to compose the final object image. Simple planar shadows and depth maps are synthesized for objects to generate plausible video sequences mimicking real-world interactions. We demonstrate our system on a variety of input videos to produce complex edits that are otherwise difficult to achieve.

19.
Real-time rendering of models with high polygon count is still an important issue in interactive computer graphics. A common way to improve rendering performance is to generate different levels of detail of a model. These are mostly computed using polygonal simplification techniques, which aim to reduce the number of polygons without significant loss of visual fidelity. Most existing algorithms use geometric error bounds, which are well suited for silhouette preservation; they ignore the fact that a much more aggressive simplification is possible in low-contrast areas inside the model. The main contribution of this paper is an efficient simplification algorithm based on the human visual system. The key idea is to move the domain of error computation from image space to vertex space to avoid a costly per-pixel comparison. This way the error estimation of a simplification operation can be accelerated significantly. To account for human vision, we introduce a perceptually based metric that depends on the contrast and spatial frequency of the model at a single vertex. Finally, we validate our approach with a user study.
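A toy sketch of a per-vertex perceptual weighting of this kind: the geometric error of a simplification operation is scaled by how visible a change of the local contrast and spatial frequency would be, using a Mannos-Sakrison-style contrast-sensitivity curve. The constants, the multiplicative weighting, and the names are assumptions standing in for the paper's actual metric.

```python
import numpy as np

def csf(frequency_cpd):
    """Contrast-sensitivity curve (Mannos-Sakrison form): band-pass in
    cycles per degree, falling off at high spatial frequencies."""
    f = np.maximum(frequency_cpd, 1e-3)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def perceptual_error(geometric_error, local_contrast, frequency_cpd):
    """Scale an operation's geometric error by how visible a change at this
    contrast/frequency is; low-contrast regions tolerate far more
    aggressive simplification."""
    visibility = local_contrast * csf(frequency_cpd)
    return geometric_error * visibility

# same geometric error, two vertices: high-contrast silhouette-like region
# versus low-contrast, high-frequency interior region
print(round(perceptual_error(1.0, local_contrast=0.8, frequency_cpd=4.0), 4))
print(round(perceptual_error(1.0, local_contrast=0.05, frequency_cpd=20.0), 4))
```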

20.
The design of video game environments, or levels, aims to control gameplay by steering the player through a sequence of designer‐controlled steps, while simultaneously providing a visually engaging experience. Traditionally these levels are painstakingly designed by hand, often from pre‐existing building blocks, or space templates. In this paper, we propose an algorithmic approach for automatically laying out game levels from user‐specified blocks. Our method allows designers to retain control of the gameplay flow via user‐specified level connectivity graphs, while relieving them from the tedious task of manually assembling the building blocks into a valid, plausible layout. Our method produces sequences of diverse layouts for the same input connectivity, allowing for repeated replay of a given level within a visually different, new environment. We support complex graph connectivities and various building block shapes, and are able to compute complex layouts in seconds. The two key components of our algorithm are the use of configuration spaces defining feasible relative positions of building blocks within a layout and a graph‐decomposition based layout strategy that leverages graph connectivity to speed up convergence and avoid local minima. Together these two tools quickly steer the solution toward feasible layouts. We demonstrate our method on a variety of real‐life inputs, and generate appealing layouts conforming to user specifications.
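A heavily simplified sketch of the configuration-space idea on axis-aligned rectangular blocks: the feasible offsets of a new block that must share an edge with an already-placed neighbor, without overlapping it, form a simple set that can be sampled directly, and blocks are then placed greedily along the connectivity graph. Overlap testing against the rest of the layout, general block shapes, the graph decomposition, and the optimization are all omitted; every name is an illustrative assumption.

```python
import random

def touching_offsets(w1, h1, w2, h2, samples=50, rng=None):
    """Sample the 'configuration space' of placements of a w2 x h2 block that
    shares an edge with a w1 x h1 block anchored at the origin (no overlap)."""
    rng = rng or random.Random(0)
    offsets = []
    for _ in range(samples):
        side = rng.choice("LRTB")
        if side in "LR":                                  # touch the left/right edge
            dx = -w2 if side == "L" else w1
            dy = rng.uniform(-h2 + 0.1, h1 - 0.1)         # keep some shared edge length
        else:                                             # touch the bottom/top edge
            dy = -h2 if side == "B" else h1
            dx = rng.uniform(-w2 + 0.1, w1 - 0.1)
        offsets.append((dx, dy))
    return offsets

def layout(blocks, edges, rng=None):
    """Greedy layout along a connectivity graph: place block 0 at the origin,
    then place each connected block at a sampled feasible offset from its
    already-placed neighbour."""
    rng = rng or random.Random(1)
    pos = {0: (0.0, 0.0)}
    for a, b in edges:                                    # assumes `a` is placed first
        (wa, ha), (wb, hb) = blocks[a], blocks[b]
        dx, dy = rng.choice(touching_offsets(wa, ha, wb, hb, rng=rng))
        pos[b] = (pos[a][0] + dx, pos[a][1] + dy)
    return pos

rooms = [(4, 3), (2, 2), (3, 5), (2, 2)]                  # (width, height) per block
print(layout(rooms, edges=[(0, 1), (0, 2), (2, 3)]))
```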
