Similar Documents
20 similar documents retrieved.
1.
A recent trend in interactive modeling of 3D shapes from a single image is designing minimal interfaces, and accompanying algorithms, for modeling a specific class of objects. Expanding upon the range of shapes that existing minimal interfaces can model, we present an interactive image‐guided tool for modeling shapes made up of extruded parts. An extruded part is represented by extruding a closed planar curve, called base, in the direction orthogonal to the base. To model each extruded part, the user only needs to sketch the projected base shape in the image. The main technical contribution is a novel optimization‐based approach for recovering the 3D normal of the base of an extruded object by exploring both geometric regularity of the sketched curve and image contents. We developed a convenient interface for modeling multi‐part shapes and a method for optimizing the relative placement of the parts. Our tool is validated using synthetic data and tested on real‐world images.
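As a concrete illustration of the extrusion representation described above, here is a minimal Python sketch that extrudes a closed planar polyline along the unit normal of its supporting plane; the function name extrude_base, the polyline input format and the omission of cap triangulation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def extrude_base(base_pts, normal, height):
    """Extrude a closed planar polyline `base_pts` (N x 3) along the unit
    `normal` of its supporting plane by `height`, returning vertices and
    quad side faces (cap triangulation omitted for brevity)."""
    base_pts = np.asarray(base_pts, dtype=float)
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)          # ensure unit length
    top_pts = base_pts + height * normal              # translated copy of the base
    verts = np.vstack([base_pts, top_pts])
    n = len(base_pts)
    # each side face connects edge (i, i+1) of the base to its copy on top
    faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return verts, faces

# hypothetical usage: a unit square base extruded 2 units along +z
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
verts, faces = extrude_base(square, normal=(0, 0, 1), height=2.0)
```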

2.
Modeling 3D objects with existing software usually requires a large amount of interaction, especially for users who lack basic knowledge of 3D geometry. Sketch‐based modeling is a solution that eases the modeling procedure and has therefore been researched for decades. However, modeling a man‐made shape with complex structures remains challenging. Existing methods adopt advanced deep learning techniques to map holistic sketches to 3D shapes, but they still struggle with complicated topologies. In this paper, we decouple the task of sketch2shape into a part generation module and a part assembling module, where deep learning methods are leveraged for the implementation of both modules. By changing the focus from holistic shapes to individual parts, our approach eases the learning process of the shape generator and guarantees high‐quality outputs. With the learned automated part assembler, users only need a little manual tuning to obtain a desired layout. Extensive experiments and user studies demonstrate the usefulness of our proposed system.
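To illustrate the decoupling of part generation from part assembly, the following PyTorch sketch wires a toy per-part encoder to a toy assembler that predicts placement parameters; the architectures, the 6-number placement output and all names are placeholders, not the networks used in the paper.

```python
import torch
import torch.nn as nn

class PartGenerator(nn.Module):
    """Maps a per-part sketch image to a latent shape code (stand-in for the
    part generation module; the real model decodes the code to geometry)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim))

    def forward(self, sketch):
        return self.encoder(sketch)

class PartAssembler(nn.Module):
    """Predicts a translation + scale (6 numbers) for each part code so the
    parts form a coherent layout (stand-in for the assembling module)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 6))

    def forward(self, part_codes):          # (num_parts, latent_dim)
        return self.mlp(part_codes)         # (num_parts, 6) placement parameters

# hypothetical usage with 3 sketched parts of size 64x64
sketches = torch.rand(3, 1, 64, 64)
codes = PartGenerator()(sketches)
placements = PartAssembler()(codes)
```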

3.
Creation of detailed character models is a very challenging task in animation production. Sketch‐based character model creation from a 3D template provides a promising solution. However, how to quickly find correct correspondences between the user's drawn sketches and the 3D template model, how to efficiently deform the 3D template model to exactly match the drawn sketches, and how to realize real‐time interactive modeling remain open problems. In this paper, we propose a new approach and develop a user interface to tackle this problem effectively. Our approach uses the drawn sketches to retrieve the most similar 3D template model from our dataset, and combines human perception and interaction with efficient computation to extract occluding and silhouette contours of the 3D template model and find correct correspondences quickly. We then combine skeleton‐based deformation and mesh editing to deform the 3D template model to fit the drawn sketches and create new, detailed 3D character models. The results presented in this paper demonstrate the effectiveness and advantages of our proposed approach and the usefulness of our developed user interface. Copyright © 2015 John Wiley & Sons, Ltd.

4.
Many casually taken ‘tourist’ photographs contain architectural objects such as houses and buildings. Reconstructing such 3D scenes captured in a single photograph is a very challenging problem. We propose a novel approach to reconstruct such architectural scenes with minimal and simple user interaction, with the goal of providing 3D navigational capability to an image rather than acquiring accurate geometric detail. Our system, Peek‐in‐the‐Pic, is based on a sketch‐based geometry reconstruction paradigm. Given an image, the user simply traces out objects from it. Our system regards these as perspective line drawings, automatically completes them and reconstructs geometry from them. We make basic assumptions about the structure of traced objects and provide simple gestures for placing additional constraints. We also provide a simple sketching tool to progressively complete parts of the reconstructed buildings that are not visible in the image and cannot be automatically completed. Finally, we fill holes created in the original image when reconstructed buildings are removed from it, by automatic texture synthesis. Users can spend more time with interactive texture synthesis to refine the image further. Thus, instead of looking at flat images, a user can fly through them after some simple processing. Minimal manual work, ease of use and interactivity are the salient features of our approach.

5.
Several applications in shape modeling and exploration require identification and extraction of a 3D shape part matching a 2D sketch. We present CustomCut, an on‐demand part extraction algorithm. Given a sketched query, CustomCut automatically retrieves partially matching shapes from a database, identifies the region optimally matching the query in each shape, and extracts this region to produce a customized part that can be used in various modeling applications. In contrast to earlier work on sketch‐based retrieval of predefined parts, our approach can extract arbitrary parts from input shapes and does not rely on a prior segmentation into semantic components. The method is based on a novel data structure for fast retrieval of partial matches: the randomized compound k‐NN graph built on multi‐view shape projections. We also employ a coarse‐to‐fine strategy to progressively refine part boundaries down to the level of individual faces. Experimental results indicate that our approach provides an intuitive and easy means to extract customized parts from a shape database, and significantly expands the design space for the user. We demonstrate several applications of our method to shape design and exploration.
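A simplified stand-in for the retrieval structure named above: a brute-force k-NN graph over multi-view descriptors with a greedy graph walk toward the query. The paper's randomized compound k-NN graph and its descriptors are more elaborate; everything below is an illustrative assumption.

```python
import numpy as np

def build_knn_graph(descriptors, k=5):
    """Brute-force k-NN graph over view descriptors (rows of `descriptors`).
    Only a simplified stand-in to illustrate the retrieval structure."""
    d2 = np.sum((descriptors[:, None, :] - descriptors[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)           # a view is not its own neighbour
    return np.argsort(d2, axis=1)[:, :k]   # indices of the k nearest views

def retrieve(query_desc, descriptors, graph, start=0, steps=10):
    """Greedy graph walk: hop to whichever neighbour is closest to the query."""
    current = start
    for _ in range(steps):
        candidates = np.append(graph[current], current)
        dists = np.linalg.norm(descriptors[candidates] - query_desc, axis=1)
        best = candidates[np.argmin(dists)]
        if best == current:
            break
        current = best
    return current

# hypothetical usage: 100 random 64-D view descriptors, one sketched query
views = np.random.rand(100, 64)
graph = build_knn_graph(views, k=5)
match = retrieve(np.random.rand(64), views, graph)
```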

6.
We present the design of an interactive image‐based modeling tool that enables a user to quickly generate detailed 3D models with texture from a set of calibrated input images. Our main contribution is an intuitive user interface that is entirely based on simple 2D painting operations and does not require any technical expertise from the user or difficult pre‐processing of the input images. One central component of our tool is a GPU‐based multi‐view stereo reconstruction scheme, implemented as an incremental algorithm that runs in the background during user interaction so that the user does not notice any significant response delay.

7.
Modeling 3D objects is difficult, especially for users who lack knowledge of 3D geometry or even of 2D sketching. In this paper, we present a novel sketch‐based modeling system which allows novice users to create custom 3D models by assembling parts drawn from a database of pre‐segmented 3D models. Different from previous systems, our system dynamically provides visualized and meaningful shadow guidance beneath the user's strokes to help the user convey a design concept easily and quickly. Our system interprets the user's strokes as similarity queries into the database to generate the shadow image that guides further drawing, and simultaneously returns the 3D candidate parts for modeling. Moreover, our system preserves the high‐level structure in generated models based on prior knowledge pre‐analyzed from the database, and allows the user to create custom parts with geometric variations. We demonstrate the applicability and effectiveness of our modeling system with human subjects and present various models designed using our system.

8.
We present an assistive system for clipart design that provides visual scaffolds from unseen viewpoints. Inspired by the artists' creation process, our system constructs the visual scaffold by first synthesizing the reference 3D shape of the input clipart and then rendering it from the desired viewpoint. The critical challenge in constructing this visual scaffold is to generate a reference 3D shape that matches the user's expectations in terms of object sizing and positioning while preserving the geometric style of the input clipart. To address this challenge, we propose a user‐assisted curve extrusion method to obtain the reference 3D shape. We render the synthesized reference 3D shape into the visual scaffold with a consistent style. By following the generated visual scaffold, users can efficiently design clipart from their desired viewpoints. A user study conducted with an intuitive user interface and our generated visual scaffolds suggests that our system is especially useful for estimating the ratio and scale between object parts and saves on average 57% of drawing time.

9.
We propose a hybrid signed distance field (SDF) method for reconstructing the detailed surface of a model as it changes from a solid state to a liquid state. Previous particle‐based fluid simulations suffer from a noisy surface problem when the particles are distributed irregularly. If a smoothing scheme is applied to reduce the problem, sharp and detailed features can be lost due to over‐smoothing artifacts. Our method constructs a hybrid SDF by combining level‐set values from the solid and liquid parts of the object. This makes it possible to represent the detailed features and smooth surfaces of an object when both solid and liquid parts are mixed in that object. In addition, we propose the concept of a guiding shape, which uses a coordinate‐warping technique to query level‐set values quickly. The guiding shape is constructed from the object before the simulation begins and some parts of it become liquid. To track and preserve the details of the initial solid shape, the transformation of the guiding shape is accumulated while the phase shift is in progress. By warping coordinates through this accumulated transformation of the guiding shape, the level‐set values of the solid part can be acquired very quickly. Copyright © 2015 John Wiley & Sons, Ltd.
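The following sketch illustrates, under assumed details, how a hybrid SDF query might combine a warped guiding-shape level set for the solid part with a particle-reconstructed level set for the liquid part; the linear blend by a solid fraction and all names are assumptions, not the paper's combination rule.

```python
import numpy as np

def hybrid_sdf(p, liquid_sdf, guide_sdf, guide_transform, solid_fraction):
    """Evaluate a hybrid signed distance at point `p`.
    - liquid_sdf(p): level-set value reconstructed from the fluid particles
    - guide_sdf(q):  level-set value of the pre-built guiding shape
    - guide_transform: accumulated 4x4 transform of the guiding shape; its
      inverse warps world coordinates back into the guide's frame
    - solid_fraction(p) in [0, 1]: 1 where the material is still solid
    The linear blend below is an illustrative assumption only."""
    p_h = np.append(p, 1.0)
    q = (np.linalg.inv(guide_transform) @ p_h)[:3]   # coordinate warping
    w = solid_fraction(p)
    return w * guide_sdf(q) + (1.0 - w) * liquid_sdf(p)

# hypothetical usage: a unit-sphere guide, a noisy 'liquid' sphere, half melted
sphere = lambda x: np.linalg.norm(x) - 1.0
noisy = lambda x: np.linalg.norm(x) - 1.0 + 0.05 * np.sin(40.0 * x[0])
identity = np.eye(4)
value = hybrid_sdf(np.array([0.5, 0.2, 0.1]), noisy, sphere, identity, lambda x: 0.5)
```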

10.
In the past decade, home appliances have been rapidly developed to satisfy the various requirements of users. Thus, there is a growing need for appropriate evaluation methods that reflect the usability of home appliances quickly and comprehensively. This study aims to develop a scenario‐based usability checklist for product designers early in the design process. The scenario‐based usability checklist consists of two parts: 1) a heuristic evaluation checklist, and 2) a scenario evaluation checklist. In developing the heuristic evaluation checklist, usability factors of home appliances are extracted and then coupled to user interface (UI) elements. In developing the scenario evaluation checklist, scenarios are developed through brainstorming in focus group interviews (FGI), and evaluation elements are then extracted from analysis of those scenarios. The proposed scenario‐based usability checklists enable designers to evaluate product designs quickly and comprehensively early in the development process from users' viewpoints. © 2010 Wiley Periodicals, Inc.

11.
As an art form between drawing and sculpture, relief has been widely used in a variety of media for signs, narratives, decorations and other purposes. Traditional relief creation relies on both professional skills and artistic expertise, and is extremely time‐consuming. Recently, automatic or semi‐automatic relief modelling from a 3D object or a 2D image has become a subject of interest in computer graphics. Various methods have been proposed to generate reliefs with few user interactions or minor human effort, while preserving or enhancing the appearance of the input. This survey provides a comprehensive review of the advances in computer‐assisted relief modelling during the past decade. First, we provide an overview of relief types and their artistic characteristics. Then, we introduce the key techniques of object‐space methods and image‐space methods respectively. Advantages and limitations of each category are discussed in detail. We conclude the report by discussing directions for possible future research.

12.
Graphical user interfaces are not always meant to remain static; some GUIs need to implement variability mechanisms. Component‐based GUIs are an ideal target for this kind of operation, because they can adapt their functionality at run‐time when their structure is updated by adding or removing components or by modifying the relationships between them. Mashup user interfaces are a good example of this type of GUI: they allow services to be combined through the assembly of graphical components. We intend to adapt component‐based user interfaces to obtain smart user interfaces. With this goal, our proposal adapts abstract component‐based architectures by using model transformation. Our aim is to generate a dynamic model transformation at run‐time, because the rules describing its behavior are not pre‐set but are selected from a repository depending on the context. The proposal describes an adaptation schema based on model transformation that provides a solution to this dynamic transformation. Context information is processed to select a rule subset from a repository at run‐time. The selected rules are used to generate, through a higher‐order transformation, the dynamic model transformation. This approach has been tested through a case study which applies different repositories to the same architecture and context. Moreover, a web tool has been developed for validation and demonstration of its applicability. The novelty of our proposal arises from the adaptation schema that creates a non‐pre‐set transformation, which enables the dynamic adaptation of component‐based architectures. Copyright © 2014 John Wiley & Sons, Ltd.
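A minimal sketch of context-driven rule selection followed by a "higher-order" step that composes the selected rules into one adaptation function; the rule and context structures are invented for illustration and do not reflect the paper's metamodels or tooling.

```python
# Hypothetical rule repository: each rule has a context predicate and an
# adaptation applied to a component-based UI model (a plain dict here).
RULE_REPOSITORY = [
    {"name": "hide_heavy_widgets",
     "applies": lambda ctx: ctx["bandwidth"] == "low",
     "apply": lambda ui: {**ui, "components": [c for c in ui["components"] if not c.get("heavy")]}},
    {"name": "compact_layout",
     "applies": lambda ctx: ctx["screen"] == "small",
     "apply": lambda ui: {**ui, "layout": "compact"}},
]

def build_dynamic_transformation(context):
    """Select the rule subset matching the context and return a single
    transformation composed from the selected rules (the higher-order step)."""
    selected = [r["apply"] for r in RULE_REPOSITORY if r["applies"](context)]
    def transform(ui_model):
        for apply_rule in selected:
            ui_model = apply_rule(ui_model)
        return ui_model
    return transform

# hypothetical usage: adapt a UI model for a low-bandwidth, small-screen context
ui = {"components": [{"id": "map", "heavy": True}, {"id": "list"}], "layout": "full"}
adapted = build_dynamic_transformation({"bandwidth": "low", "screen": "small"})(ui)
```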

13.
This paper presents a multi‐resolutional surface deformable model with a physical property adjustment scheme and shape‐preserving springs to represent surface‐deformable objects efficiently and robustly. In order to reduce the computational complexity while ensuring the same global volumetric behaviour for the deformable object, we introduce a multi‐resolutional mass‐spring model that is locally refined using the modified‐butterfly subdivision scheme. For robust deformation, a shape‐preserving spring, which helps to restore the model to its original shape, is proposed to reduce animation instability. Volume and shape preservation is achieved indirectly by restoring the model to its original shape without computing the actual volume and associated forces at every iteration. Most existing methods concentrate on the visual realism of multi‐resolutional deformation and often neglect to maintain dynamic behavioural integrity between detail levels. In order to preserve overall physical behaviour, we present a new scheme for adjusting physical properties between different levels of detail. During the animation of deformable objects, the part of the object under external forces beyond a threshold, or with large surface curvature variations, is refined to a higher level of detail. The physical properties of nodes and springs in the locally refined area are adjusted in order to preserve the total mass and global behaviour of the object. The adequacy of the proposed scheme was analysed with tests using practical mesh examples. Experimental results demonstrate improved efficiency in object deformation and preservation of overall behaviour between different levels. Copyright © 2005 John Wiley & Sons, Ltd.
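One way to see the property adjustment idea is through the series-spring rule: when a coarse spring is subdivided, each segment must be stiffer so the chain keeps the same end-to-end stiffness, and node masses must be redistributed so the total mass is unchanged. The sketch below assumes an even mass split and pure series refinement, which only illustrates the principle and is not the paper's adjustment formulas.

```python
def refine_spring(k_coarse, num_segments):
    """Subdividing one spring into `num_segments` springs in series:
    each segment needs stiffness num_segments * k_coarse so that the
    effective end-to-end stiffness of the chain stays k_coarse."""
    return num_segments * k_coarse

def redistribute_mass(m_coarse, num_children):
    """Split a coarse node's mass evenly among the refined nodes so the
    total mass of the object is preserved (an even split is an assumption)."""
    return m_coarse / num_children

# hypothetical check: one spring of stiffness 100 refined into 4 segments
k_fine = refine_spring(100.0, 4)                          # 400 per segment
effective = 1.0 / sum(1.0 / k_fine for _ in range(4))     # series combination
assert abs(effective - 100.0) < 1e-9                      # global stiffness preserved
```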

14.
We present an interactive method that allows users to easily abstract complex 3D models with only a few strokes. The key idea is to employ well‐known Gestalt principles to help generalize user inputs into a full model abstraction while accounting for the form, perceptual patterns and semantics of the model. Using these principles, we alleviate the user's need to explicitly define shape abstractions. We utilize structural characteristics such as repetition, regularity and similarity to transform user strokes into full 3D abstractions. As the user sketches over shape elements, we identify Gestalt groups and later abstract them to maintain their structural meaning. Unlike previous approaches, we operate directly on the geometric elements, in a sense applying Gestalt principles in 3D. We demonstrate the effectiveness of our approach with a series of experiments, including a variety of complex models and two extensive user studies to evaluate our framework.

15.
In this paper, we propose a controllable embedding method for high‐ and low‐dimensional geometry processing through sparse matrix eigenanalysis. Our approach is equally suitable for performing non‐linear dimensionality reduction on big data and for non‐linear shape editing of 3D meshes and point sets. At the core of our approach is the construction of a multi‐Laplacian quadratic form that is assembled from local operators whose kernels only contain locally‐affine functions. Minimizing this quadratic form provides an embedding that best preserves all relative coordinates of points within their local neighborhoods. We demonstrate the improvements that our approach brings over existing nonlinear dimensionality reduction methods on a number of datasets, and formulate the first eigen‐based as‐rigid‐as‐possible shape deformation technique by applying our affine‐kernel embedding approach to 3D data augmented with user‐imposed constraints on select vertices.
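A schematic formulation of the construction described above, with assumed notation (S_i selects the neighborhood of point i; each local operator L_i is positive semidefinite with a kernel spanned by locally affine functions):

```latex
\[
  L \;=\; \sum_i S_i^{\top} L_i\, S_i ,
  \qquad
  X^{*} \;=\; \arg\min_{X^{\top} X = I}\; \operatorname{tr}\!\left(X^{\top} L\, X\right),
\]
```

so the embedding X* is obtained from the eigenvectors of the sparse matrix L associated with its smallest non-trivial eigenvalues, consistent with the sparse-matrix eigenanalysis mentioned in the abstract.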

16.
We add a new modality to image‐based visualization by converting ordinary photos into tangible images, which can then be haptically rendered. This is performed by interactively sketching haptic models on the photos so that the models match the image parts that will become tangible. In contrast to common geometric modelling, we define the haptic models in a three‐dimensional haptic modelling space distorted by the central projection. Analytic FRep functions (variants of implicit functions) are mostly used for defining the haptic models. The tangible images thus created can realistically simulate some actual three‐dimensional scenes by implementing the principle “What You See Is What You Touch” while in fact still being 2D images. Copyright © 2015 John Wiley & Sons, Ltd.
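A toy sketch of the idea of evaluating an FRep function in a modelling space distorted by the central projection; the particular warp, the sphere primitive and the function names are assumptions for illustration only.

```python
import numpy as np

def to_haptic_space(p, focal=1.0):
    """Map a 3D point into a modelling space distorted by the central
    projection: (x, y) become image-plane coordinates, z is kept as depth.
    This particular warp is an illustrative assumption."""
    x, y, z = p
    return np.array([focal * x / z, focal * y / z, z])

def frep_sphere(q, center=(0.0, 0.0, 3.0), radius=0.5):
    """FRep convention: f(q) >= 0 inside the shape, < 0 outside."""
    c = np.asarray(center)
    return radius**2 - np.sum((np.asarray(q) - c) ** 2)

def is_touchable(p_world):
    """The haptic proxy 'touches' the model where the FRep defined in the
    distorted space is non-negative."""
    return frep_sphere(to_haptic_space(p_world)) >= 0.0

# hypothetical probe: a point roughly in front of the camera at depth 3
print(is_touchable(np.array([0.3, 0.0, 3.0])))
```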

17.
This paper presents a method that can convert a given 3D mesh into a flat‐foldable model consisting of rigid panels. A previous work proposed a method to assist the manual design of a single component of such a flat‐foldable model, consisting of vertically‐connected side panels as well as horizontal top and bottom panels. Our method semi‐automatically generates a more complicated model that approximates the input mesh with multiple convex components. The user specifies the folding direction of each convex component and the fidelity of shape approximation. Given the user inputs, our method optimizes the shapes and positions of the panels of each convex component in order to make the whole model flat‐foldable. The user can check a folding animation of the output model. We demonstrate the effectiveness of our method by fabricating physical paper prototypes of flat‐foldable models.

18.
Buildings with symmetrical façades are ubiquitous in urban landscapes, and detailed models of these buildings enhance the visual realism of digital urban scenes. However, the vast majority of existing urban building models in web‐based 3D maps such as Google Earth either lack detail or rely heavily on texturing to render the details. We present a new framework for enhancing the details of such coarse models, using the geometry and symmetry inferred from light detection and ranging (LiDAR) scans and 2D templates. The user‐defined 2D templates, referred to as coded planar meshes (CPMs), encode the geometry of the smallest repeating 3D structures of the façades via face codes. Our encoding scheme takes into account the direction, type and offset distance of the sculpting to be applied at the respective locations on the coarse model. In our approach, the LiDAR scan is registered with the coarse models taken from Google Earth 3D or Bing Maps 3D and decomposed into dominant planar segments (each representing a frontal or lateral wall of the building). The façade segments are then split into horizontal and vertical tiles using a weighted point count function defined over the window or door boundaries. This is followed by an automatic identification of CPM locations with the help of a template fitting algorithm that respects the alignment regularity as well as the inter‐element spacing of the façade layout. Finally, 3D boolean sculpting operations are applied over the boxes induced by the CPMs and the coarse model, and a detailed 3D model is generated. The proposed framework is capable of modelling details even with occluded scans and enhances not only the frontal façades (facing the street) but also the lateral façades of the buildings. We demonstrate the potential of the proposed framework by providing several examples of enhanced Google Earth models and highlight the advantages of our method when designing photo‐realistic urban façades.
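To make the tiling step more concrete, here is a simplified stand-in for splitting a façade segment at gaps between repeated elements, using a smoothed histogram of window/door boundary points as a crude point-count function; the paper's weighted point count function and template fitting are not reproduced here.

```python
import numpy as np

def split_positions(boundary_x, wall_min, wall_max, bins=100, smooth=3):
    """Split a facade into vertical tiles at valleys of a (smoothed) point-count
    histogram of window/door boundary points along the horizontal axis.
    This is a simplified stand-in for the paper's weighted point count function."""
    hist, edges = np.histogram(boundary_x, bins=bins, range=(wall_min, wall_max))
    kernel = np.ones(smooth) / smooth
    h = np.convolve(hist, kernel, mode="same")        # light smoothing
    # interior local minima of the histogram mark gaps between repeated elements
    valleys = [i for i in range(1, bins - 1) if h[i] < h[i - 1] and h[i] <= h[i + 1]]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[valleys]

# hypothetical facade: three windows whose boundary points cluster around 2, 5, 8 m
pts = np.concatenate([np.random.normal(c, 0.3, 200) for c in (2.0, 5.0, 8.0)])
cuts = split_positions(pts, 0.0, 10.0)
```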

19.
20.
Most visual diagramming tools provide point‐and‐click construction of computer‐drawn diagram elements using a conventional desktop computer and mouse. SUMLOW is a Unified Modelling Language (UML) diagramming tool that uses an electronic whiteboard (E‐whiteboard) and a sketching‐based user interface to support collaborative software design. SUMLOW allows designers to sketch UML constructs, mixing different UML diagram elements, diagram annotations, and hand‐drawn text. A key novelty of the tool is the preservation of hand‐drawn diagrams and support for manipulation of these sketches using pen‐based actions. Sketched diagrams can be automatically ‘formalized’ into computer‐recognized and ‐drawn UML diagrams and then exported to a third‐party CASE tool for further extension and use. We describe the motivation for SUMLOW, illustrate the use of the tool to sketch various UML diagram types, describe its key architecture abstractions and implementation approaches, and report on two evaluations of the toolset. We hope that our experiences will be useful for others developing sketching‐based design tools or those looking to leverage pen‐based interfaces in software applications. Copyright © 2007 John Wiley & Sons, Ltd.
