Similar Documents
20 similar documents found.
1.
A quantitative assessment is made of two display techniques, providing two different levels of depth perception, used in conjunction with a haptic device for manipulating 3D objects in virtual environments. The two display techniques are a 2D display and an interactive 3D stereoscopic virtual holography display on a zSpace tablet. Experiments involving selected pointing and manipulation tasks were conducted by several users of different ages and levels of computer training. The speed of performing the tasks with each display technique was recorded, and a statistical analysis of the data is presented. As expected, the use of the interactive 3D stereoscopic display resulted in faster performance of the tasks. The improvement was particularly noticeable when the subjects needed to manipulate the haptic arm to reach objects/targets at different depths, and when the objects/targets were partially occluded by obstacles.

2.
In this paper, we demonstrate how a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing can be applied to the task of constructing animated characters. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand-eye coordination is made possible by registering virtual space exactly to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and the 3D mouse.

3.
Collaborative virtual environments (CVEs) are 3D spaces in which users share virtual objects, communicate, and work together. To collaborate efficiently, users must develop a common representation of their shared virtual space. In this work, we investigated spatial communication in virtual environments. To perform an object co-manipulation task, users must be able to communicate and exchange spatial information, such as object position, in a virtual environment. We conducted an experiment in which we manipulated the contents of the shared virtual space to understand how users verbally construct a common spatial representation of their environment. Forty-four students participated in the experiment to assess the influence of contextual objects on spatial communication and sharing of viewpoints. The participants were asked to perform an object co-manipulation task in dyads. The results show that the presence of contextual objects, such as fixed and lateralized visual landmarks, in the virtual environment positively influences the way male operators collaborate to perform this task. These results allow us to provide some design recommendations for CVEs intended for object manipulation tasks.

4.
A 3D Visual Paradigm for Object-Oriented Systems and Its Implementation   (Cited 3 times: 0 self-citations, 3 by others)
华庆一 《计算机学报》1997, 20(9): 775-781
This paper describes a visual paradigm for representing object-oriented systems in a 3D virtual space, with an emphasis on using interactive 3D graphics to depict the relationships between objects. Its distinguishing features are effective use of screen space and a reduced cognitive load. The paradigm is implemented on an SGI graphics workstation using the interactive 3D techniques and objects provided by a 3D graphics toolkit, TOAST.

5.
This paper presents a technique for computing multiresolution shape models of 3D objects acquired as clouds of 3D points. The procedure is fully automated and is able to compute approximations for any object, overcoming sampling irregularity if present (sampling irregularity is a common feature of most 3D acquisition techniques; a typical example is stereo vision). The method described here starts by computing an intermediate mesh that meets the subdivision connectivity requirement needed to allow the computation of the wavelet transform. The mesh is then adjusted to the 3D input data using an iterative deformation process. Finally, a spherical wavelet transform is computed to obtain the object's 3D multiresolution model. This paper shows a number of real objects acquired with different techniques, including hand-held 3D digitizers. The paper also gives some examples of how multiresolution representations can be used in tasks such as acquisition noise filtering, mesh simplification and shape labelling.
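
The pipeline step most amenable to a compact illustration is the iterative deformation that fits the intermediate mesh to the input point cloud. The sketch below is a minimal, generic shrink-wrap iteration in NumPy (nearest-point attraction plus Laplacian smoothing); it is not the authors' exact algorithm, and the point cloud, neighbour structure, and step sizes are illustrative assumptions.

```python
import numpy as np

def fit_mesh_to_points(verts, neighbors, cloud, iters=50, alpha=0.5, beta=0.3):
    """Iteratively deform mesh vertices toward a cloud of acquired 3D points.

    verts     : (V, 3) initial mesh vertex positions
    neighbors : list of index lists, neighbors[i] = adjacent vertex ids of i
    cloud     : (P, 3) acquired 3D points
    alpha     : step toward the nearest cloud point (data attraction)
    beta      : step toward the neighbour centroid (Laplacian smoothing)
    """
    verts = verts.copy()
    for _ in range(iters):
        # Nearest cloud point for every vertex (brute force for clarity).
        d = np.linalg.norm(verts[:, None, :] - cloud[None, :, :], axis=2)
        nearest = cloud[np.argmin(d, axis=1)]
        # Laplacian term: centroid of the 1-ring neighbourhood.
        centroid = np.array([verts[n].mean(axis=0) for n in neighbors])
        verts += alpha * (nearest - verts) + beta * (centroid - verts)
    return verts

# Toy example: a loose ring of 8 vertices shrunk onto a noisy circle of points.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
verts = np.c_[2 * np.cos(angles), 2 * np.sin(angles), np.zeros(8)]
neighbors = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]
t = rng.uniform(0, 2 * np.pi, 200)
cloud = np.c_[np.cos(t), np.sin(t), np.zeros(200)] + 0.01 * rng.normal(size=(200, 3))
print(fit_mesh_to_points(verts, neighbors, cloud).round(2))
```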

6.
This paper describes techniques that enhance 3D direct manipulation by automatically finding effective positions for the virtual camera. Many current state-of-the-art techniques for directly manipulating 3D geometry depend strongly on the viewing angle and therefore require the user to set the viewpoint coordinates during manipulation. In some situations this process can be automated, which means the system can automatically avoid degenerate positions, in which translation and rotation operations are difficult to perform. The system can also choose the viewpoint and viewing angle so that the manipulated object is visible and is not occluded by other objects.
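
As a rough illustration of occlusion-aware viewpoint selection, the following sketch samples candidate camera positions on a sphere around the manipulated object and keeps the first one whose line of sight is not blocked by obstacle spheres. The sphere obstacles, sampling scheme, and acceptance test are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """True if the segment origin + t*direction, t in [0, 1], intersects the sphere."""
    oc = origin - center
    a = direction @ direction
    b = 2.0 * (oc @ direction)
    c = oc @ oc - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - np.sqrt(disc)) / (2 * a)
    t2 = (-b + np.sqrt(disc)) / (2 * a)
    return (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0)

def pick_viewpoint(target, obstacles, dist=5.0, samples=200, seed=1):
    """Return a camera position from which `target` is not occluded by any obstacle.

    obstacles: list of (center, radius) spheres standing in for other scene objects.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    for d in dirs:
        cam = target + dist * d
        ray = target - cam
        if not any(ray_hits_sphere(cam, ray, c, r) for c, r in obstacles):
            return cam
    return None  # every sampled viewpoint was blocked

target = np.zeros(3)
obstacles = [(np.array([2.0, 0.0, 0.0]), 1.0), (np.array([0.0, 2.0, 0.0]), 1.0)]
print(pick_viewpoint(target, obstacles))
```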

7.
Most deep-learning-based 3D model classification methods are designed for a specific task and perform poorly, with insufficient generality, when faced with diverse 3D model classification tasks. To address this, a general end-to-end deep ensemble learning model, E2E-DEL (end-to-end deep ensemble learning), is proposed; it consists of multiple base learners and one ensemble learner and can automatically learn the composite feature information of complex 3D models. A hierarchical, iterative learning strategy is adopted to take the feature-learning capability of networks at different levels into account and to balance the sub-feature learning of each base learner against the ensemble feature learning of the ensemble learner, so that the model adapts itself to diverse 3D model classification tasks. On this basis, a multi-view deep ensemble learning network, MV-DEL (multi-view deep ensemble learning), is designed and applied to three different types of 3D model classification tasks: general, fine-grained, and zero-shot. Experiments on several public datasets verify that the method generalizes well and is broadly applicable.
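
As a highly simplified sketch of the multi-view ensemble idea (several base learners whose per-view features are fused by an ensemble learner), the PyTorch snippet below uses tiny MLPs and random view features. It is an illustrative assumption of the overall structure only, not the E2E-DEL/MV-DEL architecture or its hierarchical iterative training strategy.

```python
import torch
import torch.nn as nn

class MultiViewEnsemble(nn.Module):
    """Toy stand-in for a multi-view deep ensemble classifier."""

    def __init__(self, n_views=4, feat_dim=64, hidden=32, n_classes=10):
        super().__init__()
        # One base learner per view (here: a small MLP per view).
        self.base_learners = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU()) for _ in range(n_views)]
        )
        # Ensemble learner fuses the concatenated per-view representations.
        self.ensemble = nn.Sequential(
            nn.Linear(n_views * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes)
        )

    def forward(self, views):            # views: (batch, n_views, feat_dim)
        per_view = [f(views[:, i]) for i, f in enumerate(self.base_learners)]
        return self.ensemble(torch.cat(per_view, dim=1))

model = MultiViewEnsemble()
views = torch.randn(8, 4, 64)            # 8 shapes, 4 rendered views each
labels = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(views), labels)
loss.backward()                          # a single joint (end-to-end) training step
print(float(loss))
```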

8.
Mobile robotics has achieved notable progress; however, to increase the complexity of the tasks that mobile robots can perform in natural environments, we need to provide them with a greater semantic understanding of their surroundings. In particular, identifying indoor scenes, such as an Office or a Kitchen, is a highly valuable perceptual ability for an indoor mobile robot, and in this paper we propose a new technique to achieve this goal. As a distinguishing feature, we use common objects, such as Doors or furniture, as a key intermediate representation to recognize indoor scenes. We frame our method as a generative probabilistic hierarchical model, where we use object category classifiers to associate low-level visual features to objects, and contextual relations to associate objects to scenes. The inherent semantic interpretation of common objects allows us to use rich sources of online data to populate the probabilistic terms of our model. In contrast to alternative computer-vision-based methods, we boost performance by exploiting the embedded and dynamic nature of a mobile robot. In particular, we increase detection accuracy and efficiency by using a 3D range sensor that allows us to implement a focus-of-attention mechanism based on geometric and structural information. Furthermore, we use concepts from information theory to propose an adaptive scheme that limits computational load by selectively guiding the search for informative objects. The operation of this scheme is facilitated by the dynamic nature of a mobile robot that is constantly changing its field of view. We test our approach using real data captured by a mobile robot navigating in Office and home environments. Our results indicate that the proposed approach outperforms several state-of-the-art techniques for scene recognition.
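
The core inference step, associating detected objects with a scene label through a probabilistic model, can be illustrated with a tiny naive-Bayes computation. The priors and object likelihoods below are made-up numbers for illustration only; the paper's model is hierarchical and also involves low-level features and contextual relations.

```python
# Toy scene inference: P(scene | detected objects) with a naive-Bayes assumption.
# All probabilities are illustrative, not values from the paper.
prior = {"Office": 0.5, "Kitchen": 0.5}
p_obj_given_scene = {
    "Office":  {"monitor": 0.8, "mug": 0.5, "stove": 0.01},
    "Kitchen": {"monitor": 0.05, "mug": 0.6, "stove": 0.7},
}

def posterior(detected):
    """Return P(scene | detected) assuming detections are independent given the scene."""
    scores = {}
    for scene, p in prior.items():
        for obj in detected:
            p *= p_obj_given_scene[scene].get(obj, 0.01)  # small floor for unseen objects
        scores[scene] = p
    z = sum(scores.values())
    return {scene: s / z for scene, s in scores.items()}

print(posterior(["monitor", "mug"]))   # strongly favours Office
print(posterior(["stove", "mug"]))     # strongly favours Kitchen
```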

9.
Learning-based 3D generation is a popular research field in computer graphics. Recently, some works have adopted implicit functions defined by neural networks to represent 3D objects and have become the current state of the art. However, training such networks requires precise ground-truth 3D data and heavy pre-processing, which is often unrealistic. To tackle this problem, we propose DFR, a differentiable process for rendering implicit-function representations of 3D objects into 2D images. Briefly, our method simulates the physical imaging process by casting multiple rays through the image plane into the function space, aggregating all information along each ray, and performing differentiable shading according to each ray's state. We also propose strategies to optimize the rendering pipeline, making it efficient in both time and memory so that it can support training a network. With DFR, we can perform many 3D modeling tasks with only 2D supervision. We conduct several experiments for various applications. Both quantitative and qualitative evaluations demonstrate the effectiveness of our method.
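
A minimal NumPy sketch of the ray-casting idea: sample points along each camera ray, evaluate an implicit occupancy function (here a hard-coded soft sphere standing in for a network), and aggregate the samples into a pixel value. DFR's actual aggregation and shading are differentiable and considerably more elaborate; the camera model and occupancy function below are illustrative assumptions.

```python
import numpy as np

def occupancy(p):
    """Implicit object: soft occupancy of a unit sphere at the origin."""
    return 1.0 / (1.0 + np.exp(40.0 * (np.linalg.norm(p, axis=-1) - 1.0)))

def render_silhouette(res=64, n_samples=64):
    """Cast one ray per pixel along -z and aggregate occupancy along the ray."""
    xs = np.linspace(-1.5, 1.5, res)
    px, py = np.meshgrid(xs, xs)                       # image-plane coordinates
    zs = np.linspace(2.0, -2.0, n_samples)             # sample depths along each ray
    pts = np.stack([np.broadcast_to(px[..., None], px.shape + (n_samples,)),
                    np.broadcast_to(py[..., None], py.shape + (n_samples,)),
                    np.broadcast_to(zs, px.shape + (n_samples,))], axis=-1)
    occ = occupancy(pts)                               # (res, res, n_samples)
    return 1.0 - np.prod(1.0 - occ, axis=-1)           # prob. the ray hits the object

img = render_silhouette()
print(img.shape, img[32, 32].round(3), img[0, 0].round(3))  # centre pixel ~1, corner ~0
```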

10.
Most 3D modelling software has been developed for conventional 2D displays and, as such, lacks support for true depth perception. This contributes to making polygonal 3D modelling tasks challenging, particularly when models are complex and consist of a large number of overlapping components (e.g. vertices, edges) and objects (i.e. parts). Research has shown that users of 3D modelling software often encounter a range of difficulties, which collectively can be defined as focus and context awareness problems. These include maintaining position and orientation awareness, as well as recognizing the distance between individual components and objects in 3D space. In this paper, we present five visualization and interaction techniques we have developed for multi-layered displays to better support focus and context awareness in 3D modelling tasks. The results of a user study we conducted show that three of these five techniques improve users' 3D modelling task performance.

11.
An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics-based components together with an immersive 3D direct-manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively using intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers, constrained by geometric constraints which push the surface out and spring forces which pull the surface in to the underlying layers. By tuning the parameters of the physics-based model, a variety of surface shapes and behaviors can be obtained, such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. Character construction and animation are done using a 3D user interface based on two-handed manipulation registered with head-tracked stereo viewing. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. Hand-eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques.
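
The key dynamic of the elastic surface layer model (spring forces pulling the skin in toward the underlying layer, a geometric constraint pushing it back out) can be caricatured in a few lines of NumPy. The sketch relaxes a ring of skin vertices around a circular "muscle"; the forces, constants, and 2D setting are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

def relax_skin(skin, muscle_center, muscle_radius, iters=200, k_in=0.2, k_smooth=0.3):
    """Pull skin vertices toward an underlying layer while keeping them outside it.

    skin: (N, 2) closed loop of skin vertices (neighbours are i-1 and i+1, cyclic).
    """
    skin = skin.copy()
    for _ in range(iters):
        # Spring force pulling every skin vertex in toward the underlying surface.
        to_center = muscle_center - skin
        dist = np.linalg.norm(to_center, axis=1, keepdims=True)
        pull = k_in * to_center * (1.0 - muscle_radius / dist)
        # Simple smoothing so the skin stays a coherent sheet.
        smooth = k_smooth * (0.5 * (np.roll(skin, 1, 0) + np.roll(skin, -1, 0)) - skin)
        skin += pull + smooth
        # Geometric constraint: project any vertex that fell inside back onto the muscle.
        dist = np.linalg.norm(skin - muscle_center, axis=1, keepdims=True)
        inside = (dist < muscle_radius).ravel()
        skin[inside] = muscle_center + (skin[inside] - muscle_center) / dist[inside] * muscle_radius
    return skin

theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
skin = np.c_[3 * np.cos(theta), 3 * np.sin(theta)]      # loose skin, radius 3
print(relax_skin(skin, np.zeros(2), 1.0)[:4].round(3))  # converges onto the radius-1 muscle
```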

12.
In this paper, we describe the results of an experimental study whose objective was twofold: (1) comparing three navigation aids that help users perform wayfinding tasks in desktop virtual environments (VEs) by pointing out the location of objects or places; (2) evaluating the effect of user experience with 3D desktop VEs on the effectiveness of the considered navigation aids. In particular, we compared navigation performance (in terms of total time to complete an informed search task) of 48 users divided into two groups: subjects in one group had experience in navigating 3D VEs, while subjects in the other group did not. The experiment comprised four conditions that differed in the navigation aid employed. The first and second conditions exploited 3D and 2D arrows, respectively, to point towards objects that users had to reach; in the third condition, a radar metaphor was employed to show the location of objects in the VE; the fourth condition was a control condition with no location-pointing navigation aid available. The search task was performed both in a VE representing an outdoor geographic area and in an abstract VE that did not resemble any familiar environment. For each VE, users were also asked to order the four conditions according to their preference. Results show that the navigation aid based on 3D arrows outperformed the others, both in terms of user performance and user preference, except when it was used by experienced users in the geographic VE; in that case, it was as effective as the others. Finally, in the geographic VE, experienced users took significantly less time than inexperienced users to perform the informed search, while in the abstract VE the difference was significant only in the control and radar conditions. From a more general perspective, our study highlights the need to take user experience in navigating VEs into specific consideration when designing navigation aids and evaluating their effectiveness.

13.
To answer the question “what is 3D good for?” we reviewed the body of literature concerning the performance implications of stereoscopic 3D (S3D) displays versus non-stereo (2D or monoscopic) displays. We summarized the results of over 160 publications describing over 180 experiments spanning 51 years of research in various fields, including human factors psychology/engineering, human–computer interaction, vision science, visualization, and medicine. Publications were included if they described at least one task with a performance-based experimental evaluation of an S3D display versus a non-stereo display under comparable viewing conditions. We classified each study according to the experimental task(s) of primary interest: (a) judgments of positions and/or distances; (b) finding, identifying, or classifying objects; (c) spatial manipulations of real or virtual objects; (d) navigation; (e) spatial understanding, memory, or recall; and (f) learning, training, or planning. We found that S3D display viewing improved performance over traditional non-stereo (2D) displays in 60% of the reported experiments. In 15% of the experiments, S3D either showed a marginal benefit or the results were mixed or unclear. In 25% of the experiments, S3D displays offered no benefit over non-stereo 2D viewing (and in some rare cases harmed performance). From this review, stereoscopic 3D displays were found to be most useful for tasks involving the manipulation of objects and for finding/identifying/classifying objects or imagery. We examine instances where S3D did not support superior task performance. We discuss the implications of our findings for various fields of research concerning stereoscopic displays within the context of the investigated tasks.

14.
3D representation and storytelling are two powerful means of educating students while engaging them. This paper describes a novel software architecture that couples them to create engaging linear narrations that can be shared on the web. The architecture builds on previous work on the semantic annotation of 3D worlds, which allows users to go beyond simple navigation of 3D objects and retrieve them with different search tools. The novelty of our architecture is that authors don't have to build stories from scratch, but can take advantage of the crowdsourced effort of all the users accessing the platform, who can contribute by providing assets or annotating objects. To the best of our knowledge, no existing workflow includes the collaborative annotation of 3D worlds and the possibility of creating stories on top of it. Another feature of our design is the possibility for users to switch from and to any of the available activities during the same session. This integration makes it possible to define a complex user experience, even starting from a simple linear narration. The visual interfaces of the system are described in relation to a case study focused on cultural heritage.

15.
This paper gives a method for quantifying small visual differences between 3D mesh models with conforming topology, based on the theory of strain fields. The strain field is a geometric quantity in elasticity used to describe the deformation of an elastomer; in this paper we treat 3D models as objects with elasticity. Further demonstrations are provided: the first is intended to give the reader a visual impression of how our measure works in practice, and the second is to give readers a visua...
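
The measure rests on a standard elasticity quantity, and a minimal NumPy sketch of that ingredient is shown below: the deformation gradient and Green strain of a single triangle shared by two meshes with conforming topology. How per-element strain is aggregated into the paper's visual-difference measure is not reproduced, and the triangle coordinates are illustrative.

```python
import numpy as np

def triangle_green_strain(rest, deformed):
    """Green strain tensor E = 0.5 (F^T F - I) of a single triangle.

    rest, deformed: (3, 3) vertex positions of the same triangle in both meshes.
    """
    def to_2d(tri):
        # Express the two edge vectors in an orthonormal frame of the triangle plane.
        e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
        u = e1 / np.linalg.norm(e1)
        n = np.cross(e1, e2)
        v = np.cross(n / np.linalg.norm(n), u)
        return np.column_stack([[e1 @ u, e1 @ v], [e2 @ u, e2 @ v]])

    Dr, Dd = to_2d(rest), to_2d(deformed)      # 2x2 edge matrices, columns = edges
    F = Dd @ np.linalg.inv(Dr)                 # in-plane deformation gradient
    return 0.5 * (F.T @ F - np.eye(2))

rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
stretched = np.array([[0.0, 0, 0], [1.2, 0, 0], [0, 1, 0]])  # 20% stretch along x
E = triangle_green_strain(rest, stretched)
print(E.round(3))            # E[0,0] = 0.5*(1.2**2 - 1) = 0.22, other entries ~0
```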

16.
Traditional 3D modeling techniques cannot, without professional measurement tools, create in real time a 3D model scaled to the dimensions of the room the user is in. To meet this need, a dynamic 3D modeling technique is proposed that uses a gyroscope sensor together with an improved particle swarm optimization algorithm to compute the dimensions of the room's 3D model and the camera position. The technique achieves real-time, true-to-scale modeling of the room: users can preview the decoration effect of the whole room from every direction and thus gain an intuitive impression of the overall result, and the technique is easy to operate with strong real-time performance. Experimental results show that the dynamic real-time 3D modeling technique based on the improved particle swarm optimization algorithm overcomes the measurement inaccuracy of traditional domestic 3D modeling techniques and has theoretical as well as practical significance.
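
A minimal, generic particle swarm optimization loop is sketched below to illustrate the fitting step: the swarm searches for room dimensions that best explain a set of simulated distance measurements. The objective, the synthetic "sensor" data, and all constants are illustrative assumptions; the paper's improved PSO and gyroscope-based measurement model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend sensor data: noisy measurements of a room's width, length and height (metres).
true_dims = np.array([4.0, 5.5, 2.8])
measurements = true_dims + 0.05 * rng.normal(size=(20, 3))

def cost(dims):
    """How poorly a candidate (width, length, height) explains the measurements."""
    return np.mean((measurements - dims) ** 2)

def pso(cost, dim=3, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO; not the paper's improved variant."""
    pos = rng.uniform(1.0, 10.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

print(pso(cost).round(2))   # should recover roughly [4.0, 5.5, 2.8]
```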

17.
In next-generation virtual 3D simulation, training, and entertainment environments, intelligent visualization interfaces must respond to user-specified viewing requests so users can follow salient points of the action and monitor the relative locations of objects. Users should be able to indicate which object(s) to view, how each should be viewed, what cinematic style and pace to employ, and how to respond when a single satisfactory view is not possible. When constraints fail, weak constraints can be relaxed or multi-shot solutions can be displayed in sequence or as composite shots with simultaneous viewports. To address these issues, we have developed ConstraintCam, a real-time camera visualization interface for dynamic 3D worlds.

18.
Drawing on cognitive psychology principles from the real world, the visual representation and semantic information of virtual scenes are combined to jointly serve the user's interaction process, and multiple 3D interaction techniques are integrated into a unified interaction framework, making 3D user interfaces in complex virtual environments easier to understand and use. By enhancing the semantic processing capability of the scene graph, a 3D user interface architecture supporting high-level semantics is established, so that the 3D interaction system supports the execution of interaction tasks not only at the geometric level but also at the semantic level. Finally, an application example is presented.
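
As a rough illustration of attaching semantic information to a scene graph so that interaction tasks can be resolved at the semantic level, the short sketch below pairs each node with a dictionary of semantic attributes and lets an interaction query select nodes by meaning rather than by geometry. The node structure and attribute names are illustrative assumptions, not the paper's architecture.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """A scene-graph node carrying both geometry identity and semantic annotations."""
    name: str
    semantics: dict = field(default_factory=dict)   # e.g. {"category": "door", "openable": True}
    children: list = field(default_factory=list)

    def find(self, **query):
        """Return all nodes in this subtree whose semantic annotations match the query."""
        hits = [self] if all(self.semantics.get(k) == v for k, v in query.items()) else []
        for child in self.children:
            hits.extend(child.find(**query))
        return hits

room = SceneNode("room", {"category": "room"}, [
    SceneNode("door_1", {"category": "door", "openable": True}),
    SceneNode("table_1", {"category": "table"}, [
        SceneNode("cup_1", {"category": "cup", "graspable": True}),
    ]),
])

# A semantic-level interaction task: "open something openable" resolves to door_1.
print([n.name for n in room.find(openable=True)])
```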

19.
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions, either using traditional input devices or focusing on different input modalities, such as touch and mid-air gestures. Different virtual environments and diverse input modalities present specific issues for controlling object position, orientation and scaling: traditional mouse input, for example, presents non-trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid-air interfaces, for interaction in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.

20.
The intention of the strategy proposed in this paper is to solve the object retrieval problem in highly complex scenes using 3D information. In the worst-case scenario, the complexity of the scene includes several objects with irregular or free-form shapes, viewed from any direction, which are self-occluded or partially occluded by other objects with which they are in contact and whose appearance is uniform in intensity/color. This paper introduces and analyzes a new 3D recognition/pose strategy based on DGI (Depth Gradient Images) models. After comparing it with current representative techniques, we can affirm that DGI has very interesting prospects. The DGI representation synthesizes both surface and contour information, thus avoiding restrictions concerning the layout and visibility of the objects in the scene. This paper first explains the key concepts of the DGI representation and shows the main properties of this method in comparison to a set of known techniques. The performance of this strategy in real scenes is then reported. Details are also presented of a wide set of experimental tests, including results under occlusion, performance with injected noise and experiments with cluttered scenes of a high level of complexity.
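
The basic building block of the representation, a gradient image computed from depth data, can be illustrated with a few lines of NumPy on a synthetic depth map. How DGI organizes these gradients into view-based models for matching is not reproduced here, and the synthetic scene is an illustrative assumption.

```python
import numpy as np

# Synthetic depth map: a flat background at 2 m with a spherical bump in the middle.
res = 128
ys, xs = np.mgrid[0:res, 0:res]
r2 = (xs - res / 2) ** 2 + (ys - res / 2) ** 2
depth = np.full((res, res), 2.0)
bump = r2 < 40 ** 2
depth[bump] = 2.0 - 0.3 * np.sqrt(1.0 - r2[bump] / 40 ** 2)

# Depth gradients: how quickly depth changes across the image in y and x.
gy, gx = np.gradient(depth)
grad_mag = np.hypot(gx, gy)

print(grad_mag.max().round(4))                 # largest gradients near the bump's silhouette
print(grad_mag[res // 2, res // 2].round(4))   # nearly zero at the centre of the bump
```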
