Similar literature
20 similar documents found (search time: 25 ms)
1.
We present design principles for conceiving tangible user interfaces for the interactive physically-based deformation of 3D models. Based on these design principles, we developed a first prototype using a passive tangible user interface that embodies the 3D model. By associating an arbitrary reference material with the user interface, we convert the displacements of the user interface into the forces required by physically-based deformation models. These forces are then applied, via a physical deformation model, to the 3D model made out of any material. In this way, we compensate for the absence of direct haptic feedback, which allows us to use a force-driven physically-based deformation model. A user study on simple deformations of various metal beams shows that our prototype is usable for deformation with the user interface embodying the virtual beam. Our first results validate our design principles, and they also have high educational value for mechanical engineering lectures.
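The displacement-to-force conversion described above can be sketched with a linear (Hookean) reference material. The function name and stiffness value here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def displacement_to_force(displacements, stiffness):
    """Convert measured interface displacements (m) into forces (N)
    using a linear reference material (Hooke's law: F = k * x) --
    one plausible reading of the displacement-to-force conversion."""
    return stiffness * np.asarray(displacements, dtype=float)

# A 3-node beam interface displaced at its free end.
u = [0.0, 0.002, 0.005]   # displacements in metres
k = 1200.0                # assumed reference stiffness in N/m
forces = displacement_to_force(u, k)
print(forces)             # forces to feed the deformation model
```

In this sketch the resulting force vector would then drive whatever force-based deformation model handles the actual beam material.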

2.
This paper presents a novel constraint-based 3D manipulation approach to interactive constraint-based solid modelling. This approach employs a constraint recognition process to automatically recognise assembly relationships and geometric constraints between entities from 3D manipulation. A technique referred to as allowable motion is used to achieve accurate 3D positioning of a solid model by automatically constraining its 3D manipulation without menu interaction. A set of virtual design tools, which can be used to construct constraint-based solid models within a virtual environment, is also supported. These tools have been implemented as functional 3D objects associated with several pre-defined modelling functions to simulate physical tools such as a drilling tool and a T-square. They can be directly manipulated by the user, and precisely positioned relative to other solid models through the constraint-based 3D manipulation approach. Their modelling functions can be automatically triggered, depending upon their associated constraints and the user's manner of manipulation. A prototype system has been implemented to demonstrate the feasibility of these techniques for model construction and assembly operations.
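The "allowable motion" idea above can be sketched as projecting a raw manipulation delta onto whatever motion a recognised constraint still permits. The function name and the axis-only constraint are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def constrain_motion(delta, allowed_axis):
    """Project a raw 3D manipulation delta onto the motion still allowed
    by a recognised constraint (here: translation along a single axis)."""
    axis = np.asarray(allowed_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return float(np.dot(delta, axis)) * axis

# After an against-face constraint is recognised, only sliding along x remains.
raw = np.array([3.0, 1.0, -2.0])
print(constrain_motion(raw, [1.0, 0.0, 0.0]))  # x component kept, y/z suppressed
```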

3.
An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics-based components together with an immersive 3D direct manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively using intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers, constrained by geometric constraints which push the surface out and spring forces which pull the surface in to the underlying layers. By tuning the parameters of the physics-based model, a variety of surface shapes and behaviors can be obtained, such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. Character construction and animation are done using a 3D user interface based on two-handed manipulation registered with head-tracked stereo viewing. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving their head.
To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. Hand-eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques.
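The attachment forces that pull the skin in toward the underlying layers in the elastic surface layer model can be sketched as a simple linear spring per skin vertex. The function name and constants here are illustrative, not taken from the system described above:

```python
import numpy as np

def skin_spring_force(skin_pt, anchor_pt, k):
    """Linear spring pulling a skin vertex toward its anchor point on the
    underlying layer -- a minimal stand-in for the attachment forces in an
    elastic-surface-layer style model. F = k * (anchor - skin)."""
    return k * (np.asarray(anchor_pt, dtype=float) - np.asarray(skin_pt, dtype=float))

skin = [1.0, 0.5, 0.0]     # current skin vertex position
anchor = [1.0, 0.0, 0.0]   # its anchor on the underlying layer
print(skin_spring_force(skin, anchor, k=10.0))  # force pulling the vertex back in
```

In a full simulation such forces would be balanced against the geometric constraints that push the surface outward.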

4.
In the early stages of 3D design, sketches are used to quickly conceptualize ideas and gain insight into problems and possible solutions. Computer-aided design tools are widely used for 3D modeling and design, but their required precision and 2D mouse-and-screen-based interface inhibit the flow of ideas. A study was conducted to explore the efficiency of hand tracking and virtual reality (VR) for 3D object manipulation in conceptual design. Based on existing research on conceptual design and hand gestures, an intuitive hand-based interaction model is proposed. An experiment on basic 3D manipulation shows that participants using a simple VR and hand-tracking interface prototype perform similarly to those using a traditional mouse and screen interface. Issues relevant to improving gestural conceptual design interfaces are identified.

5.
6.
In this age of (near-)adequate computing power, the power and usability of the user interface are as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP (Windows, Icons, Menus, Point-and-Click GUI based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts and can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involve more of our senses by combining the use of gesture, speech, and haptics.

7.
Manipulating and assembling elements in a 3D space is a task relevant to a huge number of potential applications, whether they deal with real or abstract objects. Direct manipulation techniques in traditional interactive systems use 2D devices and do not allow easy manipulation of 3D objects. To facilitate user interaction, we have studied direct manipulation techniques in a virtual reality environment. A VR interface is naturally object-oriented and allows the definition of real-world metaphors. Operators can thus work in the virtual world in a similar way to the real world: they perceive the position of objects through the depth cue of stereo view, and can grab and push them in any direction by means of a virtual hand until they reach their destination. They can put an object on top of another and line it up with other objects. We model the virtual world as a job-oriented world which is governed by a few simple rules that facilitate object positioning. In this paper, we describe the design and implementation strategies used to obtain real-time performance on a low-end workstation.

8.
In this paper, we present a three-dimensional user interface for synchronous co-operative work, Spin, which has been designed for multi-user synchronous real-time applications to be used in, for example, meetings and learning situations. Spin is based on a new metaphor of virtual workspace. We have designed an interface, for an office environment, which recreates the three-dimensional elements needed during a meeting and increases the user's scope of interaction. In order to accomplish these objectives, animation and three-dimensional interaction in real time are used to enhance the feeling of collaboration within the three-dimensional workspace. Spin is designed to keep a maximum amount of information visible. The workspace is created using artificial geometry (as opposed to true three-dimensional geometry) and spatial distortion, a technique that allows all documents and information to be displayed simultaneously while centring the user's focus of attention. Users interact with each other via their respective clones, which are three-dimensional representations displayed in each user's interface, and are animated with user actions on shared documents. An appropriate object manipulation system (direct manipulation, 3D devices and specific interaction metaphors) is used to point out and manipulate 3D documents.

9.
Sketch-based techniques and adaptive techniques are combined with traditional virtual reality technology and applied to virtual teaching. By enhancing the handling of personalised interaction requirements between the user and the system, the system's intelligence and friendliness are improved, which in turn improves the effectiveness of the virtual teaching environment. The aim is to build a sketch-based adaptive user interface in a virtual environment and apply it to virtual teaching to meet specific teaching needs. The sketch-context processing mechanism of a virtual teaching environment based on an adaptive sketch user interface is analysed in detail with examples. On this basis, a virtual teaching prototype system was designed and implemented; experiments show that the system noticeably improves the user experience.

10.
The paper discusses basic approaches to implementation of a graphical user interface (GUI) as virtual two- and three-dimensional environments for human-computer interaction. A design approach to virtual four-dimensional environments based on special visual effects is proposed. Functional capabilities of the FDC package that implements an environment prototype and principles of user operation are given.

11.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced, along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking, and simple hand pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
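The colour-segmentation stage mentioned above can be sketched as a simple rule-based skin-colour test over an RGB image. The thresholds and function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def skin_mask(rgb, r_min=95, rg_gap=15):
    """Very simple rule-based skin-colour segmentation on an RGB array:
    a pixel is skin-like when red dominates green and blue by fixed
    (illustrative) margins. Returns a boolean mask."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_min) & (r - g > rg_gap) & (r > b)

# A 1x2 test image: a skin-toned pixel and a blue pixel.
img = np.array([[[200, 140, 120], [40, 60, 200]]], dtype=np.uint8)
print(skin_mask(img))  # first pixel skin-like, second not
```

Real systems would follow this with morphological clean-up, feature extraction, and template matching, as the abstract describes.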

12.
In this paper, we present a believable interaction mechanism for manipulating multiple objects in a ubiquitous/augmented virtual environment. A believable interaction in a multimodal framework is defined as a persistent and consistent process with respect to contextual experience and common sense about the feedback. We present a tabletop interface as a quasi-tangible framework to provide believable processes. An enhanced tabletop interface is designed to support a multimodal environment. As an exemplar task, we applied the concept to fast access and manipulation of distant objects. A set of enhanced manipulation mechanisms is presented for remote manipulation, including inertial widgets, a transformable tabletop, and proxies. The proposed method is evaluated in both performance and user acceptability in comparison with previous approaches. The proposed technique uses intuitive hand gestures and provides a higher level of believability. It can also support other types of accessing techniques such as browsing and manipulation. Copyright © 2007 John Wiley & Sons, Ltd.

13.
V-desModel: A Conceptual Design Description Model Based on Virtual Prototyping
Yang Qiang, Guo Yang, Peng Yuxing, Li Sikun. Journal of Software, 2002, 13(4): 748-753
Traditional conceptual design methods lack realistic means of interaction and therefore struggle to express the designer's intent intuitively. Virtual-prototype-based conceptual design not only provides designers with a realistic virtual design environment, but also fully embodies the low cost, short cycle, and high flexibility of modern design. Addressing the characteristics of conceptual design and a classification of virtual-prototype features, a virtual-prototype-based conceptual design model, V-desModel, is proposed. Its core is to describe design objects with a product view model, integrate the notion of virtual features into the view model, and describe constraint relationships between design objects with an extensible "3D entity-constraint graph". V-desModel effectively supports the virtual-prototype-based conceptual design process and better resolves, in conceptual design, the product virtual proto…

14.
With the development of human-robot interaction technologies, haptic interfaces are widely used in 3D applications to provide the sense of touch. These interfaces have been utilized in medical simulation, virtual assembly and remote manipulation tasks. However, haptic interface design and control remain critical problems in reproducing the highly sensitive human sense of touch. This paper presents the development and evaluation of a 7-DOF (degree of freedom) haptic interface based on a modified delta mechanism. First, both the kinematics and the dynamics of the modified mechanism are analyzed and presented. A novel gravity compensation algorithm based on the physical model is proposed and validated in simulation. A haptic controller is proposed based on the forward kinematics and the gravity compensation algorithm. To evaluate the control performance of the haptic interface, a prototype has been implemented. Three kinds of experiments: gravity compensation, static response, and force tracking were performed. The experimental results show that the mean error of the gravity compensation is less than 0.7 N and the maximum continuous force along the axis can be up to 6 N. This demonstrates the good performance of the proposed haptic interface.
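Model-based gravity compensation of the kind described above can be illustrated with a single link: the joint torque needed to hold the link against gravity. This one-link sketch (names and values are assumptions) stands in for the per-joint terms that a 7-DOF device would sum over its whole kinematic chain:

```python
import math

def gravity_torque(mass, com_dist, theta):
    """Joint torque (N*m) needed to hold one link against gravity:
    tau = m * g * l_c * cos(theta), with theta measured from horizontal
    and l_c the distance from the joint to the link's centre of mass."""
    g = 9.81  # gravitational acceleration, m/s^2
    return mass * g * com_dist * math.cos(theta)

print(gravity_torque(0.5, 0.2, 0.0))           # worst case: link horizontal
print(gravity_torque(0.5, 0.2, math.pi / 2))   # link vertical: torque ~ 0
```

The controller would add such feedforward terms to the commanded joint torques so the user does not feel the device's own weight.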

15.
Immersive visualisation is increasingly being used for comprehensive and rapid analysis of objects in 3D and object dynamic behaviour in 4D. Challenges are therefore presented to provide natural user interaction to enable effortless virtual object manipulation. Presented in this paper is the development and evaluation of an immersive human-computer interaction system based on stereoscopic viewing and natural hand gestures. The development is based on the integration of a back-projection stereoscopic system for object and hand display, a hybrid inertial and ultrasonic tracking system to provide the absolute positions and orientations of the user's head and hands, as well as a pair of high degrees-of-freedom data gloves to provide the relative positions and orientations of digit joints and tips on both hands. The evaluation is based on a two-object scene with a virtual cube and a CT (computed tomography) volume created for demonstration of real-time immersive object manipulation. The system is shown to provide a correct user view of objects and hands in 3D with depth, as well as to enable a user to use a number of simple hand gestures to perform basic object manipulation tasks involving selection, release, translation, rotation and scaling. Also included in the evaluation are some quantitative tests of the system performance in terms of speed and latency.

16.
Integral projections are a useful technique in many computer vision problems. In this paper, we present a perceptual interface which allows us to navigate through a virtual 3D world by using the movements of the face of human users. The system applies advanced computer vision techniques to detect, track, and estimate the pose of the user’s head. The core of the proposed approach consists of a face tracker, which is based on the computation, alignment, and analysis of integral projections. This technique provides a robust, accurate, and stable 2D location of the face in each frame of the input video. Then, 3D location and orientation of the head are estimated, using some predefined heuristics. Finally, the resulting 3D pose is transformed into control signals for the navigation in the virtual world. The proposed approach has been implemented and tested in a prototype, which is publicly available. Some experimental results are shown, proving the feasibility of the method. The perceptual interface is fast, stable, and robust to facial expression and illumination conditions.
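The integral projections at the core of the face tracker above are simply the per-row and per-column intensity sums of a grayscale image; a minimal sketch:

```python
import numpy as np

def integral_projections(gray):
    """Vertical and horizontal integral projections of a grayscale image:
    the per-row and per-column intensity sums whose alignment and analysis
    drive integral-projection-based face tracking."""
    vert = gray.sum(axis=1)   # one value per row
    horiz = gray.sum(axis=0)  # one value per column
    return vert, horiz

img = np.array([[0, 10, 0],
                [5, 20, 5]])
v, h = integral_projections(img)
print(v)  # [10 30]
print(h)  # [ 5 30  5]
```

A tracker then matches these 1D signals between frames, which is far cheaper than matching the full 2D image.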

17.
Zhang Quangui, Li Xin, Wang Pu. Computer Engineering and Design, 2012, 33(6): 2472-2475, 2521
The traditional 2D user interfaces of configuration software have low fidelity and cannot intuitively reflect production conditions on the industrial shop floor. To address this problem, an X3D-based 3D user-interface configuration engine for configuration software was designed and implemented. The system architecture of the engine and the workflow for configuring a scene with it are described. A 3D scene object library was built on X3D, and associations between objects were established as constraint relationships for scene configuration, which speeds up the configuration process. A prototype of the engine was implemented with the XJ3D graphics toolkit. Experiments show that configuration with the engine is straightforward, and that the 3D user interfaces generated with it achieve good simulation fidelity.

18.
To address users' heavy cognitive load and poorly coordinated resource allocation in virtual environments, this paper analyses the explicit expression of information in human cognitive activity and proposes an information-visualisation resource model based on distributed cognition. The computer senses the user's actions, behaviour, and tasks in the virtual environment; a resource-allocation scheme determines the mapping between resources and information, which is stored in the form of information representations. By further refining these representations, the layout of visual elements in the interactive interface is optimised. A usability evaluation of a VR prototype system was conducted with eye-tracking equipment; the results show that the visualisation model can reduce users' cognitive load and improve the user experience.

19.
Objective: Virtual manufacturing environments require complex and precise 3D human-computer interaction. The main problem with current virtual environments (VEs) is that the user's cognitive and operational load during interaction is too heavy, and interaction efficiency urgently needs to improve. An important way to solve this problem is to raise the machine's cognitive capability. Method: This paper studies the analysis and extraction of user intention and builds an algorithm for multimodal user-intention understanding in order to improve interaction efficiency. Results: Experimental results for typical intentions in a virtual assembly application are presented and analysed. The usability and reliability of multimodal intentions, as well as the real-time performance of the intention-based system, were evaluated in an experiment on users picking up objects in a virtual assembly space. With a distance of 5,000 mm between the 3D mouse and the object, an operation took 5.3447 s on average in the traditional system versus 2.3266 s in the intention-based system; the intention-based system greatly reduces operation time and complexity. Conclusion: Intention-driven system context switching can effectively reduce the user's cognitive load when working in virtual environments, help system developers model and analyse hybrid systems, and lower development complexity. The results show that the multimodal model and algorithm for user-intention understanding can greatly improve the naturalness and efficiency of interactive systems. The approach applies not only to the virtual assembly system used here, but is equally effective for all virtual-environment application scenarios.

20.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple sparse camera based free view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow selected objects by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets to generate novel perspective corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent textured mapped billboards are used to render the moving objects at their correct locations and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
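The view-dependent billboards above work by orienting a textured quad toward the virtual camera each frame. The orientation part can be sketched as computing the unit vector from the object to the camera (function name and setup are illustrative assumptions):

```python
import numpy as np

def billboard_forward(obj_pos, cam_pos):
    """Unit vector from a billboarded object toward the camera, used to
    orient the textured quad so it always faces the viewer -- a minimal
    sketch of one step of view-dependent billboard rendering."""
    d = np.asarray(cam_pos, dtype=float) - np.asarray(obj_pos, dtype=float)
    return d / np.linalg.norm(d)

# Camera straight ahead of the object along +z: the quad should face +z.
print(billboard_forward([0.0, 0.0, 0.0], [0.0, 0.0, 5.0]))
```

A renderer would build the quad's rotation from this forward vector (plus an up vector) and select the camera texture closest to the viewing direction.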
