Similar Documents
 20 similar documents retrieved.
1.
We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one-degree-of-freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed and other moving parts. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.
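The abstract's automatic solver is not described here, but the kind of one-degree-of-freedom mechanism it produces can be illustrated with a minimal sketch: a planar four-bar linkage whose coupler joint is driven by a single crank angle. All names and dimensions below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def four_bar_coupler(theta, ground=3.0, crank=1.0, coupler=2.5, rocker=2.0):
    """Position of the coupler joint C of a planar four-bar linkage.

    Ground pivots: A = (0, 0), D = (ground, 0).
    B rotates about A on the crank; C is the circle-circle intersection
    of radius `coupler` about B and radius `rocker` about D.
    Returns None if the linkage cannot close at this crank angle.
    """
    A = np.array([0.0, 0.0])
    D = np.array([ground, 0.0])
    B = A + crank * np.array([np.cos(theta), np.sin(theta)])

    d = np.linalg.norm(D - B)
    if d > coupler + rocker or d < abs(coupler - rocker):
        return None  # linkage cannot assemble in this configuration

    # Standard circle-circle intersection (pick the upper branch).
    a = (coupler**2 - rocker**2 + d**2) / (2 * d)
    h = np.sqrt(max(coupler**2 - a**2, 0.0))
    mid = B + a * (D - B) / d
    perp = np.array([-(D - B)[1], (D - B)[0]]) / d
    return mid + h * perp

# Trace the coupler joint over one full crank revolution (single degree of freedom).
trajectory = [four_bar_coupler(t) for t in np.linspace(0, 2 * np.pi, 180)]
```

The designed linkage in the paper is chosen so that this single-parameter trajectory passes through the user's key poses in order; the sketch only shows how one scalar input drives the whole motion.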

2.
Visualization of 4D Vector Field Topology
In this paper, we present an approach to the topological analysis of four‐dimensional vector fields. In analogy to traditional 2D and 3D vector field topology, we provide a classification and visual representation of critical points, together with a technique for extracting their invariant manifolds. For effective exploration of the resulting four‐dimensional structures, we present a 4D camera that provides concise representation by exploiting projection degeneracies, and a 4D clipping approach that avoids self‐intersection in the 3D projection. We exemplify the properties and the utility of our approach using specific synthetic cases.
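As in 2D and 3D vector field topology, critical points are locations where the field vanishes and can be coarsely classified by the eigenvalues of the field's Jacobian. The sketch below illustrates that classification idea for a 4D field given as a callable; the function names and the source/saddle/sink labels are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def jacobian(field, x, eps=1e-5):
    """Numerical Jacobian of a vector field f: R^4 -> R^4 at point x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((4, 4))
    for i in range(4):
        dx = np.zeros(4)
        dx[i] = eps
        J[:, i] = (field(x + dx) - field(x - dx)) / (2 * eps)
    return J

def classify_critical_point(field, x):
    """Coarse classification of a critical point by Jacobian eigenvalues."""
    eig = np.linalg.eigvals(jacobian(field, x))
    pos = int(np.sum(eig.real > 0))
    neg = int(np.sum(eig.real < 0))
    if pos == 4:
        return "source"
    if neg == 4:
        return "sink"
    return f"saddle ({pos} repelling / {neg} attracting directions)"

# Example: a linear 4D field with a saddle at the origin.
A = np.diag([1.0, 2.0, -1.0, -3.0])
print(classify_critical_point(lambda x: A @ x, np.zeros(4)))
```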

3.
Sketching is a simple and natural way for humans to express and communicate. For this reason, it has gained increasing popularity in human‐computer interaction with the emergence of multitouch tablets and styluses. In recent years, sketch-based interactive methods have been widely used in many retrieval systems; in particular, a variety of sketch-based 3D model retrieval methods have been presented. However, almost all of these works directly match sketches against projection views of 3D models, and they suffer from the large differences between sketch drawings and model views, leading to unsatisfactory retrieval results. In this paper, we therefore propose to match the query sketch with each 3D model through the sketches of historical users instead of through projection views. Since the sketches of the current user and the historical users can still differ considerably, we also aim to handle users' personalized deviations: we leverage recommendation algorithms to estimate the drawing-style similarity between the current user and historical users. Experimental results on the Large Scale Sketch Track Benchmark (SHREC14LSSTB) demonstrate that our method outperforms several state-of-the-art methods.
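The abstract does not specify the recommendation model, but the underlying idea of weighting historical users by style similarity can be sketched minimally: assume every user is represented by a style feature vector, use cosine similarity as the weight, and blend each historical user's sketch-to-model scores. All names and the similarity measure are assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieval_scores(query_feat, user_feats, user_model_scores):
    """Blend per-historical-user retrieval scores, weighted by style similarity.

    query_feat        : style feature of the current user's sketch
    user_feats        : dict  user_id -> style feature vector
    user_model_scores : dict  user_id -> {model_id: sketch-to-sketch match score}
    Returns a dict model_id -> aggregated score for ranking.
    """
    weights = {u: max(cosine(query_feat, f), 0.0) for u, f in user_feats.items()}
    total = sum(weights.values()) + 1e-9
    scores = {}
    for u, per_model in user_model_scores.items():
        for model_id, s in per_model.items():
            scores[model_id] = scores.get(model_id, 0.0) + weights[u] / total * s
    return scores
```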

4.
Projector phone use: practices and social implications
Phones with integrated pico projectors are starting to be marketed as devices for business presentations and media viewing, and researchers are beginning to design projection-specific applications and interaction techniques to explore a broader array of possible uses. To begin to document how people use projector phones outside the laboratory, we present the results of a 4-week exploratory field study of naturalistic use of commodity projector phones. In our analysis, we consider how context, such as group size, relationships, and locale, influences projector phone use. A key observation is that users can readily exploit the new facilities of these devices to author interesting effects by employing representational techniques such as superimposition, scaling, translation, and motion. Thus, even the “basic” projector phone platform affords novel interaction modalities. Finally, we discuss the social implications of projector phone use for privacy and control, extrapolating from our observations to envision a future in which these devices are ubiquitous. With ubiquity, projector phone use may become problematic in public settings, motivating new rules of etiquette and perhaps laws, yet it may also engender new forms of creative expression.

5.
We propose a method for the data-driven inference of temporal evolutions of physical functions with deep learning. More specifically, we target fluid flow problems and propose a novel LSTM-based approach to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. We demonstrate for the first time that dense 3D+time functions of a physics system can be predicted within the latent spaces of neural networks, and we arrive at a neural-network-based simulation algorithm with significant practical speed-ups. We highlight the capabilities of our method with a series of complex liquid simulations and with a set of single-phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitude faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.
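The key idea is to compress each high-dimensional pressure field into a low-dimensional latent code with an autoencoder and then let an LSTM advance that code through time. A minimal PyTorch sketch of this two-stage architecture follows; all layer sizes and class names are arbitrary illustrations, not the paper's network.

```python
import torch
import torch.nn as nn

class PressureAutoencoder(nn.Module):
    """Compresses a flattened pressure field into a small latent code."""
    def __init__(self, field_dim, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(field_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, field_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentPredictor(nn.Module):
    """LSTM that advances a sequence of latent codes one step into the future."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):            # z_seq: (batch, time, latent_dim)
        out, _ = self.lstm(z_seq)
        return self.head(out[:, -1])     # predicted next latent code
```

At inference time one would encode the recent frames, roll the LSTM forward entirely in latent space, and decode back to a full field only when needed, which is where the reported speed-up over a pressure solver comes from.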

6.
Three-dimensional capabilities on mobile devices are increasing, and interactivity is becoming a key feature of these tools. Users are expected to actively engage with 3D content rather than consume it passively. Because touch-screens let users interact with 3D content by directly touching and manipulating graphical elements, touch-based interaction is a natural and appealing style of input for 3D applications. However, developing 3D interaction techniques for handheld devices with touch-screens is not a straightforward task. One issue is that when interacting with 3D objects, users occlude the object with their fingers. Furthermore, because the user's finger covers a large area of the screen, the smallest object size users can touch is limited. In this paper, we first examine the performance of existing 3D interaction techniques on handheld devices. Then, we present a set of precise Dual-Finger 3D Interaction Techniques for small displays. Finally, we present the results of an experimental study in which we evaluate the usability, performance, and error rate of the proposed and existing 3D interaction techniques.

7.
With the widespread use of 3D acquisition devices, there is an increasing need for consolidating captured noisy and sparse point cloud data into accurate representations of the underlying structures. Numerous algorithms tackle this ill-posed problem by relying on a variety of assumptions such as local smoothness. However, such priors lead to the loss of important features and geometric detail. Instead, we propose a novel data-driven approach to point cloud consolidation based on a convolutional neural network. Our method takes a sparse and noisy point cloud as input and produces a dense point cloud that accurately represents the underlying surface by resolving ambiguities in geometry. The resulting point set can then be used to reconstruct accurate manifold surfaces and estimate surface properties. To achieve this, we propose a generative neural network architecture that can take point clouds as input and output, unlocking a powerful set of tools from the deep learning literature. We use this architecture to apply convolutional neural networks to local patches of geometry for high-quality and efficient point cloud consolidation. This results in significantly more accurate surfaces, as we illustrate with a diversity of examples and comparisons to the state of the art.
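A network that "takes point clouds as input and output" can be sketched, in spirit, as a patch-wise encoder-decoder: each noisy local patch of points is mapped to a denser output patch. The toy PyTorch sketch below uses a PointNet-style shared per-point MLP with max pooling; the sizes and layers are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PatchConsolidator(nn.Module):
    """Maps a sparse, noisy patch of points to a denser patch of n_out points."""
    def __init__(self, n_out=256, feat=128):
        super().__init__()
        # Shared per-point MLP followed by a symmetric (max) pooling,
        # so the encoder is invariant to the input point ordering.
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, feat), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(),
                                     nn.Linear(512, n_out * 3))
        self.n_out = n_out

    def forward(self, patch):                 # patch: (batch, n_in, 3)
        f = self.point_mlp(patch)             # (batch, n_in, feat)
        g = f.max(dim=1).values               # order-invariant patch descriptor
        out = self.decoder(g)                 # (batch, n_out * 3)
        return out.view(-1, self.n_out, 3)    # dense output patch

# Training would minimize a point-set distance (e.g. Chamfer distance)
# between the predicted dense patch and a ground-truth dense patch.
```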

8.
We present a real-time approach for acquiring 3D objects with high fidelity using hand-held consumer-level RGB-D scanning devices. Existing real-time reconstruction methods typically do not take the point of interest into account and thus might fail to produce clean reconstructions of the desired objects because of distracting objects or backgrounds. In addition, any change in the background during scanning, which often occurs in real scenarios, can easily break the whole reconstruction process. To address these issues, we incorporate visual saliency into a traditional real-time volumetric fusion pipeline. Salient regions detected in RGB-D frames suggest user-intended objects; by understanding user intentions, our approach can put more emphasis on important targets while suppressing the disturbance of unimportant objects. Experimental results on real-world scans demonstrate that our system can effectively acquire the geometry of salient objects in cluttered real-world scenes, even when the background changes.
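One way to "put more emphasis on important targets" in volumetric fusion is to weight each frame's depth contribution by a saliency value during the running-average TSDF update. The sketch below shows that weighted update for a single voxel; the weighting scheme and parameter names are assumptions, not the paper's exact formulation.

```python
def fuse_voxel(tsdf, weight, new_sdf, saliency, max_weight=64.0):
    """Saliency-weighted running-average TSDF update for one voxel.

    tsdf, weight : current fused signed distance and accumulated weight
    new_sdf      : truncated signed distance observed in the current frame
    saliency     : per-pixel saliency in [0, 1] at the voxel's projection;
                   non-salient (background) observations contribute little.
    """
    w_new = saliency                      # emphasis on user-intended objects
    fused = (tsdf * weight + new_sdf * w_new) / max(weight + w_new, 1e-9)
    return fused, min(weight + w_new, max_weight)

# A salient observation moves the voxel estimate much more than a
# background observation with the same depth value.
print(fuse_voxel(0.0, 1.0, 0.5, saliency=0.9))
print(fuse_voxel(0.0, 1.0, 0.5, saliency=0.1))
```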

9.
We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior art in two respects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to the deforming object. Second, our approach uses a single RGB-D camera, without the need for a multi-camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach to texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphics hardware and provides a handy configuration for capturing a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture. We also show that our approach is resilient to misalignment caused by imperfect estimation of the warping fields and inaccurate camera parameters.
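A basic building block behind a texture atlas that is "consistent with multiple color observations over time" is the weighted blend of all color samples that land on a texel, where low-confidence observations (e.g., grazing views that are more likely to be misaligned) contribute less. A minimal sketch of such a blend follows; the particular weighting is an illustrative assumption, not the paper's optimization.

```python
import numpy as np

def blend_texel(colors, normals, view_dirs):
    """Blend the color observations of one texel over time.

    colors    : (T, 3) RGB samples of the texel from T registered frames
    normals   : (T, 3) unit surface normal at the sample (world space)
    view_dirs : (T, 3) unit direction from the camera to the sample
    Each observation is weighted by how frontally the surface was seen,
    which downweights grazing, likely misaligned samples.
    """
    colors = np.asarray(colors, dtype=float)
    cos_angle = -np.einsum('ij,ij->i', np.asarray(normals), np.asarray(view_dirs))
    w = np.clip(cos_angle, 0.0, 1.0) ** 2
    if w.sum() < 1e-9:
        return colors.mean(axis=0)
    return (w[:, None] * colors).sum(axis=0) / w.sum()
```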

10.
A mechanism is presented for direct manipulation of 3D objects with a conventional 2D input device, such as a mouse. The user can define and modify a model by graphical interaction on a 3D perspective or parallel projection. A gestural interface technique enables the specification of 3D transformations (translation, rotation and scaling) by 2D pick and drag operations. Interaction is not restricted to single objects but can be applied to compound objects as well. The method described in this paper is an easy-to-understand 3D input technique which does not require any special hardware and is compatible with the designer's mental model of object manipulation.
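The core of such pick-and-drag manipulation is interpreting a 2D drag on the projection as a 3D transformation, typically by unprojecting the cursor onto a constraint plane through the object. A minimal sketch of drag-to-translate along a plane (perspective camera); the function names and the plane constraint are assumptions for illustration.

```python
import numpy as np

def ray_through_pixel(pixel, inv_view_proj, viewport):
    """Unproject a 2D pixel to a ray (origin, direction) in world space."""
    x = 2.0 * pixel[0] / viewport[0] - 1.0
    y = 1.0 - 2.0 * pixel[1] / viewport[1]          # flip y into NDC
    near = inv_view_proj @ np.array([x, y, -1.0, 1.0])
    far = inv_view_proj @ np.array([x, y, 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]
    d = far - near
    return near, d / np.linalg.norm(d)

def drag_translate(obj_pos, pixel_from, pixel_to, plane_normal,
                   inv_view_proj, viewport):
    """Translate an object so it follows the cursor on a constraint plane
    through the object (assumes the view ray is not parallel to the plane)."""
    def hit(pixel):
        o, d = ray_through_pixel(pixel, inv_view_proj, viewport)
        t = np.dot(obj_pos - o, plane_normal) / np.dot(d, plane_normal)
        return o + t * d                              # ray/plane intersection
    return obj_pos + (hit(pixel_to) - hit(pixel_from))
```

Rotation and scaling gestures can be mapped analogously, e.g., by interpreting drag direction relative to projected axis handles.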

11.
We present a novel method to reconstruct a fluid's 3D density and motion from just a single sequence of images. This is made possible by using powerful physical priors for this strongly under-determined problem. More specifically, we propose a novel strategy to infer density updates strongly coupled to previous and current estimates of the flow motion. Additionally, we employ an accurate discretization and depth-based regularizers to compute stable solutions. Using only one view for the reconstruction drastically reduces the complexity of the capturing setup and could even allow online video databases or smart-phone videos to serve as inputs. The reconstructed 3D velocity can then be flexibly utilized, e.g., for re-simulation, domain modification, or guiding purposes. We demonstrate the capacity of our method with a series of synthetic test cases and the reconstruction of real smoke plumes captured with a Raspberry Pi camera.
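The "density updates strongly coupled to previous and current estimates of the flow motion" can be pictured as a semi-Lagrangian advection step: the previous density estimate is transported by the current velocity estimate before being corrected against the new image. A minimal 2D sketch of that advection step (the actual method solves a coupled 3D optimization; names are assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(density, velocity, dt=1.0):
    """Semi-Lagrangian advection of a 2D density field by a velocity field.

    density  : (H, W) array
    velocity : (2, H, W) array, velocity[0] = vy, velocity[1] = vx (cells/frame)
    Each cell looks back along the velocity to sample where its content came from.
    """
    h, w = density.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    src = np.stack([yy - dt * velocity[0], xx - dt * velocity[1]])
    return map_coordinates(density, src, order=1, mode='nearest')

# In a reconstruction loop, the advected density serves as the physical prior
# that is then corrected so its rendering matches the current input image.
```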

12.
Virtual cities are in demand for computer games, movies, and urban planning, but creating the numerous 3D building models they require takes a lot of time. Procedural modeling has become popular in recent years to overcome this issue, but writing a grammar that produces a desired output is difficult and time consuming even for expert users. In this paper, we present an interactive tool that automatically generates such a grammar from a single image of a building. The user selects a photograph and highlights the silhouette of the target building as input to our method. Our pipeline then automatically generates the building components, from the large-scale building mass down to fine-scale window and door geometry. Each stage of the pipeline combines convolutional neural networks (CNNs) and optimization to select and parameterize procedural grammars that reproduce the building elements in the picture. In the first stage, our method jointly estimates camera parameters and the building mass shape. Once known, the building mass enables the rectification of the façades, which are given as input to the second stage that recovers the façade layout. This layout allows us to extract individual windows and doors that are subsequently fed to the last stage of the pipeline, which selects procedural grammars for windows and doors. Finally, the grammars are combined to generate a complete procedural building as output. We devise a common methodology to make each stage of this pipeline tractable: simplify the input image to match the visual appearance of synthetic training data, and use optimization to refine the parameters estimated by the CNNs. We used our method to generate a variety of procedural building models from existing photographs.
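The common methodology, a CNN gives an initial estimate of the procedural parameters which optimization then refines against the image, can be sketched as a derivative-free local search that minimizes a rendering-versus-image discrepancy. A small sketch with SciPy; the renderer, the discrepancy, and all names are placeholders/assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def refine_parameters(cnn_estimate, render, target_image):
    """Refine CNN-predicted procedural parameters by image comparison.

    cnn_estimate : initial parameter vector predicted by the network
    render       : callable mapping a parameter vector to a synthetic image
    target_image : the (rectified, simplified) input photograph
    """
    def discrepancy(params):
        return float(np.mean((render(params) - target_image) ** 2))

    result = minimize(discrepancy, x0=np.asarray(cnn_estimate, dtype=float),
                      method="Nelder-Mead")      # derivative-free local search
    return result.x
```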

13.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices that provide users with natural and intuitive ways of interacting. This paper presents an interaction model, several techniques, and ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of everyday life: one picks up a tool and then uses it on an object. Instead of a conventional mouse, actions are entered through two novel input devices, a hand-input and a force-input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed by the system in a combined or independent way. The hand-input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.

14.
We present facetons, geometric modeling primitives designed for building architectural models, especially effective in virtual environments where six-degree-of-freedom input devices are available. A faceton is an oriented point floating in the air that defines a plane of infinite extent passing through that point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With this simple faceton interaction, users can easily create 3D architectural models. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling, where users have to manually specify vertices and edges that could be far apart. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry, but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling in an immersive virtual environment. We describe the modeling method and our current implementation. The implementation is still experimental but shows potential as a viable alternative to traditional modeling methods.
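Each faceton is an oriented point that defines an infinite plane, and the model is the intersection of the half-spaces those planes bound. A minimal sketch of the representation and an inside test follows (the paper's adaptive bounding algorithm, which turns this into an actual polygonal mesh, is more involved; all names here are assumptions).

```python
import numpy as np

class Faceton:
    """An oriented point: position p and unit normal n define the plane
    n . (x - p) = 0; the interior of the model lies on the -n side."""
    def __init__(self, position, normal):
        self.p = np.asarray(position, dtype=float)
        self.n = np.asarray(normal, dtype=float)
        self.n /= np.linalg.norm(self.n)

    def signed_distance(self, x):
        return float(np.dot(self.n, np.asarray(x) - self.p))

def inside(facetons, x, eps=1e-9):
    """A point is inside the model if it lies behind every faceton plane."""
    return all(f.signed_distance(x) <= eps for f in facetons)

# Example: four walls, a floor and a ceiling describe a 2 x 2 x 2 room.
room = [Faceton(( 1, 0, 0), ( 1, 0, 0)), Faceton((-1, 0, 0), (-1, 0, 0)),
        Faceton(( 0, 1, 0), ( 0, 1, 0)), Faceton(( 0,-1, 0), ( 0,-1, 0)),
        Faceton(( 0, 0, 1), ( 0, 0, 1)), Faceton(( 0, 0,-1), ( 0, 0,-1))]
print(inside(room, (0.5, 0.0, -0.3)))   # True
print(inside(room, (1.5, 0.0,  0.0)))   # False
```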

15.
Emerging input modalities could facilitate more efficient user interactions with mobile devices. An end-user customization tool based on user-defined context-action rules lets users specify personal, multimodal interaction with smart phones and external appliances. The tool's input modalities include sensor-based, user-trainable free-form gestures; pointing with radio frequency tags; and implicit inputs based on such things as sensors, the Bluetooth environment, and phone platform events. The tool enables user-defined functionality through a blackboard-based context framework enhanced to manage the rule-based application control. Test results on a prototype implemented on a smart phone with real context sources show that rule-based customization helps end users efficiently customize their smart phones and use novel input modalities.
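User-defined context-action rules can be sketched as a tiny rule engine that evaluates each rule's condition against the current context facts and fires the corresponding action. A minimal illustration; the rule format, the context keys, and the example rules are assumptions, not the tool's actual syntax.

```python
def evaluate_rules(context, rules):
    """Fire every rule whose condition holds in the current context.

    context : dict of context facts, e.g. {"location": "home", "bt_devices": {...}}
    rules   : list of (condition, action) pairs, where condition is a
              predicate over the context and action is a callable.
    """
    for condition, action in rules:
        if condition(context):
            action(context)

# Example rule: "when the phone sees the car's Bluetooth on a weekday,
# switch the profile to 'driving'"; "on a double-shake gesture, go silent".
def set_profile(name):
    return lambda ctx: print(f"profile -> {name}")

rules = [
    (lambda c: "car-audio" in c.get("bt_devices", set()) and c.get("weekday", False),
     set_profile("driving")),
    (lambda c: c.get("gesture") == "shake_twice", set_profile("silent")),
]

evaluate_rules({"bt_devices": {"car-audio"}, "weekday": True}, rules)
```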

16.
A camera's shutter controls the incoming light reaching the camera sensor. Different shutters lead to wildly different results and are often used as an artistic tool in movies, e.g., to indirectly control the amount of motion blur. However, a physical camera is limited to a single shutter setting at any given moment. ShutterApp enables users to define spatio-temporally varying virtual shutters that go beyond the options available in real-world camera systems. A user provides a sparse set of annotations that define shutter functions at selected locations in key frames. From this input, our solution defines shutter functions for each pixel of the video sequence using a suitable interpolation technique, and these are then employed to derive the output video. Our solution runs in real time on commodity hardware, so users can explore different options interactively, leading to a new level of expressiveness without relying on specialized hardware or laborious editing.
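A spatio-temporally varying virtual shutter amounts to giving every pixel its own temporal weighting function and integrating the input frames with it. In the sketch below the sparse per-location shutter annotations are spread over the image by inverse-distance weighting; that interpolation choice and all names are assumptions, not the paper's technique.

```python
import numpy as np

def interpolate_shutters(anchors, shape):
    """anchors: list of ((y, x), weights) where `weights` is a length-T shutter
    function annotated at that pixel; returns per-pixel shutter weights (H, W, T)."""
    h, w = shape
    T = len(anchors[0][1])
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    num = np.zeros((h, w, T))
    den = np.zeros((h, w, 1))
    for (ay, ax), weights in anchors:
        d = np.sqrt((yy - ay) ** 2 + (xx - ax) ** 2)[..., None] + 1e-3
        num += np.asarray(weights) / d          # inverse-distance weighting
        den += 1.0 / d
    return num / den

def apply_virtual_shutter(frames, shutter):
    """frames: (T, H, W[, C]); shutter: (H, W, T). Integrate frames per pixel."""
    s = shutter / (shutter.sum(axis=-1, keepdims=True) + 1e-9)
    s = np.moveaxis(s, -1, 0)                   # (T, H, W)
    if frames.ndim == 4:
        s = s[..., None]
    return (s * frames).sum(axis=0)
```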

17.
18.
We discuss spatial selection techniques for three‐dimensional datasets. Such 3D spatial selection is fundamental to exploratory data analysis. While 2D selection is efficient for datasets with explicit shapes and structures, it is less efficient for data without such properties. We first propose a new taxonomy of 3D selection techniques, focusing on the amount of control the user has to define the selection volume. We then describe the 3D spatial selection technique Tangible Brush, which gives manual control over the final selection volume. It combines 2D touch with 6‐DOF 3D tangible input to allow users to perform 3D selections in volumetric data. We use touch input to draw a 2D lasso, extruding it to a 3D selection volume based on the motion of a tangible, spatially‐aware tablet. We describe our approach and present its quantitative and qualitative comparison to state‐of‐the‐art structure‐dependent selection. Our results show that, in addition to being dataset‐independent, Tangible Brush is more accurate than existing dataset‐dependent techniques, thus providing a trade‐off between precision and effort.
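Extruding a 2D lasso into a 3D selection volume by moving the tablet can be pictured as sweeping the lasso along the tablet's trajectory: a data point is selected if, at some sampled pose of the sweep, it lies close to the tablet plane and projects inside the lasso polygon. A simplified sketch; pose sampling, slab thickness, and names are assumptions, not the paper's implementation.

```python
import numpy as np
from matplotlib.path import Path

def select_points(points, lasso_2d, tablet_poses, slab=0.05):
    """Select 3D points swept by a 2D lasso moved through space.

    points       : (N, 3) data points
    lasso_2d     : (M, 2) lasso polygon in the tablet's local (u, v) plane
    tablet_poses : list of (R, t) with R a 3x3 rotation whose columns are the
                   tablet axes in world space and t the tablet origin, sampled
                   along the tablet's motion
    slab         : half-thickness of the slice selected around each pose
    """
    polygon = Path(lasso_2d)
    selected = np.zeros(len(points), dtype=bool)
    for R, t in tablet_poses:
        local = (points - t) @ R            # world -> tablet coordinates
        in_slab = np.abs(local[:, 2]) < slab
        in_lasso = polygon.contains_points(local[:, :2])
        selected |= in_slab & in_lasso
    return selected
```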

19.
We introduce Flexible Live‐Wire, a generalization of the Live‐Wire interactive segmentation technique with floating anchors. In our approach, the user input for Live‐Wire is no longer limited to the setting of pixel‐level anchor nodes, but can use more general anchor sets. These sets can be of any dimension, size, or connectedness. The generality of the approach allows the design of a number of user interactions while providing the same functionality as the traditional Live‐Wire. In particular, we experiment with this new flexibility by designing four novel Live‐Wire interactions based on specific primitives: paint, pinch, probable, and pick anchors. These interactions are only a subset of the possibilities enabled by our generalization. Moreover, we discuss the computational aspects of this approach and provide practical solutions to alleviate any additional overhead. Finally, we illustrate our approach and new interactions through several example segmentations.
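Classic Live-Wire computes a minimum-cost path through a pixel graph from an anchor to the cursor; generalizing the single anchor pixel to an anchor *set* simply means seeding the shortest-path search from every pixel in the set. A minimal sketch with multi-source Dijkstra on a 4-connected grid follows; the per-pixel cost used here is a placeholder, not the paper's cost function.

```python
import heapq
import numpy as np

def live_wire(cost, anchor_set, target):
    """Multi-source Dijkstra from an anchor set to a target pixel.

    cost       : (H, W) per-pixel cost (e.g. low on strong edges)
    anchor_set : iterable of (y, x) seed pixels (any size, shape, or connectedness)
    target     : (y, x) pixel under the cursor
    Returns the minimum-cost path from the anchor set to the target.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    heap = []
    for s in anchor_set:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, tuple(s)))
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == tuple(target):
            break
        if d > dist[y, x]:
            continue
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                prev[(ny, nx)] = (y, x)
                heapq.heappush(heap, (dist[ny, nx], (ny, nx)))
    path, node = [], tuple(target)
    while node in prev:                      # walk back to the nearest anchor
        path.append(node)
        node = prev[node]
    return path[::-1]
```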

20.