Similar Documents
20 similar documents found.
1.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking, and simple hand pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
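A minimal sketch of the colour-segmentation stage described above, assuming OpenCV; the HSV thresholds are illustrative placeholders rather than the authors' calibrated values:

    import cv2
    import numpy as np

    def segment_hand(frame_bgr):
        """Return a binary mask of likely skin pixels in a BGR frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Illustrative skin-tone range; real systems calibrate per user and lighting.
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 180, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Morphological clean-up removes speckle before feature extraction.
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

The resulting mask is what the finger- and hand-tracking template-matching stage would operate on.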

2.
The advent of depth cameras has enabled mid-air interactions for shape modeling with bare hands. Typically, these interactions employ a finite set of pre-defined hand gestures to allow users to specify modeling operations in virtual space. However, human interactions in real world shaping processes (such as pottery or sculpting) are complex, iterative, and continuous. In this paper, we show that the expression of user intent in shaping processes can be derived from the geometry of contact between the hand and the manipulated object. Specifically, we describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the pot’s profile to the shape of the user’s hand represented as a point-cloud (PCL). Thus, a user does not need to learn, know, or remember any gestures to interact with our system. Our choice of pottery simplifies the geometric representation, allowing us to systematically study how users use their hands and fingers to express the intent of deformation during a shaping process. Our evaluations demonstrate that it is possible to enable users to express their intent for shape deformation without the need for a fixed set of gestures for clutching and deforming a shape.
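A minimal numerical sketch of the profile-convergence idea, under the assumption that the pot is represented as a radial profile sampled at fixed heights and the hand point cloud is given in cylindrical (radius, height) coordinates; the function name, band width, and blending rate are illustrative:

    import numpy as np

    def deform_profile(profile_r, profile_z, hand_pts, rate=0.2, band=0.01):
        """Move each profile radius gradually toward the nearest hand point.

        profile_r, profile_z : (N,) arrays, pot radius per sampled height
        hand_pts             : (M, 2) hand samples as (radius, height)
        """
        new_r = profile_r.copy()
        for i, z in enumerate(profile_z):
            near = hand_pts[np.abs(hand_pts[:, 1] - z) < band]
            if len(near):
                target = near[:, 0].min()               # innermost hand sample wins
                new_r[i] += rate * (target - new_r[i])  # gradual convergence
        return new_r

Calling this once per frame yields the progressive convergence of the pot's profile to the hand shape that the abstract describes, with no gesture vocabulary involved.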

3.
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or focusing on different input modalities, such as touch and mid-air gestures. Different virtual environments and diverse input modalities present specific issues to control object position, orientation and scaling: traditional mouse input, for example, presents non-trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state-of-the-art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid-air interfaces, to interact in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.

4.
In this paper, we demonstrate how a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing can be applied to the task of constructing animated characters. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate through a virtual tool metaphor the objects appearing in front of the screen. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand-eye coordination is made possible by registering virtual space exactly to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and 3D mouse.
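A minimal sketch of one ingredient of such a configuration: deriving per-eye viewpoints from the tracked 6-DOF head pose so stereo images can follow head motion. Names and the default interocular distance are assumptions, not the paper's values:

    import numpy as np

    def eye_positions(head_pos, head_rot, ipd=0.064):
        """Left/right eye positions from a 6-DOF head pose.

        head_pos : (3,) tracked head position, metres
        head_rot : (3, 3) head rotation matrix (columns = head-local axes)
        ipd      : interocular distance in metres
        """
        right_axis = head_rot[:, 0]           # head-local x axis
        offset = 0.5 * ipd * right_axis
        return head_pos - offset, head_pos + offset

Each eye position then feeds an off-axis projection toward the fixed physical screen rectangle, which is what keeps virtual objects at a fixed location in physical space as the head moves.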

5.
An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics-based components together with an immersive 3D direct manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively using intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers, constrained by geometric constraints which push the surface out and spring forces which pull the surface in to the underlying layers. By tuning the parameters of the physics-based model, a variety of surface shapes and behaviors can be obtained, such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. Character construction and animation are done using a 3D user interface based on two-handed manipulation registered with head-tracked stereo viewing. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate through a virtual tool metaphor the objects appearing in front of the screen. Hand-eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques.
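A minimal sketch of the pull-in/push-out balance described above, assuming an explicit per-vertex relaxation step (the paper's actual solver is not specified in this abstract):

    import numpy as np

    def relax_skin(skin, anchors, normals, k=0.5, dt=0.1, gap=0.0):
        """One explicit step: springs pull skin in, constraints push it out.

        skin    : (N, 3) skin vertex positions
        anchors : (N, 3) closest points on the underlying layer
        normals : (N, 3) outward unit normals of the underlying layer
        """
        force = k * (anchors - skin)                    # springs pull skin inward
        skin = skin + dt * force
        # Geometric constraint: keep the skin on or outside the layer surface.
        depth = np.einsum('ij,ij->i', skin - anchors, normals) - gap
        inside = depth < 0
        skin[inside] -= depth[inside, None] * normals[inside]
        return skin

Because the anchors are recomputed as the articulated figure moves, the skin is free to slide along the underlying layers, as the model requires.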

6.
A solution for interaction using finger tracking in a cubic immersive virtual reality system (or immersive cube) is presented. Rather than using a traditional wand device, users can manipulate objects with fingers of both hands in a close-to-natural manner for moderately complex, general purpose tasks. Our solution couples finger tracking with a real-time physics engine, combined with a heuristic approach for hand manipulation, which is robust to tracker noise and simulation instabilities. A first study has been performed to evaluate our interface, with tasks involving complex manipulations, such as balancing objects while walking in the cube. The user’s finger-tracked manipulation was compared to manipulation with a 6 degree-of-freedom wand (or flystick), as well as with carrying out the same task in the real world. Users were also asked to perform a free task, allowing us to observe their perceived level of presence in the scene. Our results show that our approach provides a feasible interface for immersive cube environments and is perceived by users as being closer to the real experience compared to the wand. However, the wand outperforms direct manipulation in terms of speed and precision. We conclude with a discussion of the results and implications for further research.
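A minimal sketch of one common grasp heuristic of this kind (the abstract does not spell out the authors' exact rules): declare a grasp when two finger contacts press from roughly opposite sides of the object, which is far more robust to tracker noise than trusting simulated contact forces alone:

    import numpy as np

    def is_grasped(contact_normals, opposition=-0.7):
        """True if any two finger contacts press from roughly opposite sides."""
        n = [v / np.linalg.norm(v) for v in contact_normals]
        return any(np.dot(a, b) < opposition
                   for i, a in enumerate(n) for b in n[i + 1:])

While grasped, the object can be attached kinematically to the hand frame and handed back to the physics engine on release.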

7.
This paper proposes an augmented reality content authoring system that enables ordinary users who do not have programming capabilities to easily apply interactive features to virtual objects on a marker via gestures. The purpose of this system is to simplify augmented reality (AR) technology usage for ordinary users, especially parents and preschool children who are unfamiliar with AR technology. The system provides an immersive AR environment with a head-mounted display and recognizes users’ gestures via an RGB-D camera. Users can freely create the AR content that they will be using, without any special programming ability, simply by connecting virtual objects stored in a database to the system. Following recognition of the marker via the system’s RGB-D camera worn by the user, he/she can apply various interactive features to the marker-based AR content using simple gestures. The interactive features applied to AR content allow the user to enlarge, shrink, rotate, and move virtual objects with hand gestures. In addition to this gesture-interactive feature, the proposed system also allows for tangible interaction using markers. The AR content that the user edits is stored in a database, and is retrieved whenever the markers are recognized. The results of comparative experiments indicate that the proposed system is easier to use and has a higher interaction satisfaction level than AR environments such as fixed-monitor and touch-based interaction on mobile screens.
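A minimal sketch of how such gestures can be mapped to object transforms, assuming two tracked hand positions per frame; all names are illustrative:

    import numpy as np

    def two_hand_transform(l0, r0, l1, r1):
        """Scale, yaw, and translation updates from two hands between frames.

        l0, r0 : left/right hand positions at the previous frame, (3,) arrays
        l1, r1 : positions at the current frame
        """
        v0, v1 = r0 - l0, r1 - l1
        scale = np.linalg.norm(v1) / max(np.linalg.norm(v0), 1e-9)
        # Yaw change of the hand-to-hand vector in the horizontal plane.
        yaw = np.arctan2(v1[1], v1[0]) - np.arctan2(v0[1], v0[0])
        translate = 0.5 * ((l1 + r1) - (l0 + r0))   # midpoint motion
        return scale, yaw, translate

Spreading or closing the hands enlarges or shrinks the object, twisting them rotates it, and moving both together transfers it.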

8.
The acceptance of virtual environment (VE) technology requires scrupulous optimization of the most basic interactions in order to maximize user performance and provide efficient and enjoyable virtual interfaces. Motivated by insufficient understanding of the human factors design implications of interaction techniques and tools for virtual interfaces, this paper presents results of a formal study that compared two basic interaction metaphors for egocentric direct manipulation in VEs, virtual hand and virtual pointer, in object selection and positioning experiments. The goals of the study were to explore immersive direct manipulation interfaces, compare performance characteristics of interaction techniques based on the metaphors of interest, understand their relative strengths and weaknesses, and derive design guidelines for practical development of VE applications.
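A minimal sketch contrasting the two metaphors as selection tests, assuming spherical object bounds: the virtual hand selects by proximity, the virtual pointer by ray casting:

    import numpy as np

    def hand_select(hand_pos, center, radius):
        """Virtual hand: select when the hand is inside the object's bounds."""
        return np.linalg.norm(hand_pos - center) <= radius

    def pointer_select(origin, direction, center, radius):
        """Virtual pointer: select when a ray from the hand hits the object."""
        d = direction / np.linalg.norm(direction)
        to_c = center - origin
        along = np.dot(to_c, d)                   # closest approach along ray
        miss2 = np.dot(to_c, to_c) - along ** 2   # squared perpendicular miss
        return along >= 0 and miss2 <= radius ** 2

The trade-off the study probes follows directly: the hand metaphor is limited by reach, while the pointer reaches far but amplifies angular tracking noise with distance.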

9.
This paper describes an immersive system, called 3DIVE, for interactive volume data visualization and exploration inside the CAVE virtual environment. Combining interactive volume rendering and virtual reality provides a natural immersive environment for volumetric data visualization. More advanced data exploration operations, such as object-level data manipulation, simulation, and analysis, are supported in 3DIVE by several new techniques. In particular, volume primitives and texture regions are used for the rendering, manipulation, and collision detection of volumetric objects; and the region-based rendering pipeline is integrated with 3D image filters to provide an image-based mechanism for interactive transfer function design. The system has recently been released as public domain software for CAVE/ImmersaDesk users, and is currently being actively used by various scientific and biomedical visualization projects.
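A minimal sketch of a 1D transfer function of the kind such systems let users design interactively, assuming sparse RGBA control points; this is a generic construction, not 3DIVE's image-filter pipeline:

    import numpy as np

    def build_transfer_function(points, resolution=256):
        """RGBA lookup table from sparse (value, r, g, b, a) control points."""
        points = sorted(points)
        xs = np.array([p[0] for p in points])
        grid = np.linspace(xs[0], xs[-1], resolution)
        lut = np.empty((resolution, 4), dtype=np.float32)
        for c in range(4):
            ys = np.array([p[1 + c] for p in points])
            lut[:, c] = np.interp(grid, xs, ys)
        return lut   # indexed by normalised scalar value during rendering

    # e.g. soft tissue faint and translucent, dense material opaque (illustrative)
    lut = build_transfer_function([(0.0, 0, 0, 0, 0),
                                   (0.4, 1.0, 0.8, 0.6, 0.1),
                                   (1.0, 1.0, 1.0, 1.0, 1.0)])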

10.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, some techniques, and some ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of a human's everyday life: One picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand- and a force-input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed in a combined or in an independent way by the system. The use of a hand-input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.
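A minimal sketch of static pose recognition from a hand-input device, assuming a per-finger flexion vector and nearest-template matching; the templates and threshold are illustrative, not the paper's:

    import numpy as np

    POSE_TEMPLATES = {                      # illustrative joint-angle templates (radians)
        "point": np.array([0.0, 1.5, 1.5, 1.5, 1.5]),
        "fist":  np.array([1.5, 1.5, 1.5, 1.5, 1.5]),
        "open":  np.array([0.0, 0.0, 0.0, 0.0, 0.0]),
    }

    def classify_pose(flexion, threshold=1.0):
        """Nearest-template classification of a 5-finger flexion vector."""
        name, dist = min(((k, np.linalg.norm(flexion - v))
                          for k, v in POSE_TEMPLATES.items()),
                         key=lambda kv: kv[1])
        return name if dist < threshold else None

A recognized pose can then select a tool in the tool-object syntax, with dynamic gestures handled by a sequence matcher on top of the same feature vectors.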

11.
Traditional display systems usually display 3D objects on static screens (monitor, wall, etc.) and the manipulation of virtual objects by the viewer is usually achieved via indirect tools such as keyboard or mouse. It would be more natural and direct if we displayed the object on a handheld surface and manipulated it with our hands as if we were holding the real 3D object. In this paper, we propose a prototype system that projects the object onto a handheld foam sphere. The aim is to develop an interactive 3D object manipulation and exhibition tool without the viewer having to wear spectacles. In our system, the viewer holds the sphere with his hands and moves it freely. Meanwhile, we project well-tailored images onto the sphere to follow its motion, giving the viewer a virtual perception as if the object were sitting inside the sphere and being moved by the viewer. The design goal is to develop a low-cost, real-time, and interactive 3D display tool. An off-the-shelf projector-camera pair is first calibrated via a simple but efficient algorithm. Vision-based methods are proposed to detect the sphere and track its subsequent motion. The projection image is generated based on the projective geometry among the projector, sphere, camera and the viewer. We describe how to locate the viewing spot and warp the projection image. We also present the result and the performance evaluation of the system.
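A minimal sketch of the vision-based sphere-detection step, assuming OpenCV's Hough circle transform on the camera image; all parameters are illustrative:

    import cv2

    def find_sphere(frame_bgr):
        """Detect the handheld sphere's image-space centre and radius."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)      # suppress texture before edge voting
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=200, param1=100, param2=40,
                                   minRadius=40, maxRadius=300)
        if circles is None:
            return None
        x, y, r = max(circles[0], key=lambda c: c[2])   # keep the largest circle
        return (x, y), r

The detected circle, together with the projector-camera calibration, fixes the sphere's pose for warping the projection image onto it.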

12.
In this paper, we propose an approach to tangible augmented reality (AR) based design evaluation of information appliances, which not only exploits tangible objects without hardwired connections to provide better visual immersion and support more tangible interaction, but also adopts a simple, low-cost AR environment setup to improve user experience and performance. To enhance visual immersion, we develop a solution for resolving hand occlusion in which skin color information is exploited together with the tangible objects to detect the hand regions properly. To improve tangible interaction with the sense of touch, we introduce product- and fixture-type objects, which provide the feeling of holding the product in one's hands and touching buttons with the index fingertip in the AR setup. To improve user experience and performance with respect to hardware configuration, we adopt a simple and cost-effective AR setup that properly meets guidelines such as viewing size and distance, working posture, viewpoint matching, and camera movement. From experimental results, we found that the AR setup improves user experience and performance in design evaluation of handheld information appliances. We also found that the tangible interaction combined with the hand occlusion solver greatly improves tangible interaction and immersive visualization of virtual products, while letting the user experience the shapes and functions of the products comfortably.
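A minimal sketch of the hand-occlusion idea, assuming OpenCV: wherever skin is detected, the camera pixels are kept on top of the virtual render, so the real hand occludes the virtual product. The YCrCb thresholds are illustrative:

    import cv2

    def composite_with_hand(camera_bgr, render_bgr, render_mask):
        """Overlay the virtual render, but let detected skin show through.

        render_mask : uint8 mask of pixels covered by the virtual product
        """
        ycrcb = cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))  # illustrative
        show_virtual = cv2.bitwise_and(render_mask, cv2.bitwise_not(skin))
        out = camera_bgr.copy()
        out[show_virtual > 0] = render_bgr[show_virtual > 0]
        return out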

13.
We present an interface for 3D object manipulation in which standard transformation tools are replaced with transient 3D widgets invoked by sketching context-dependent strokes. The widgets are automatically aligned to axes and planes determined by the user's stroke. Sketched pivot-points further expand the interaction vocabulary. Using gestural commands, these basic elements can be assembled into dynamic, user-constructed 3D transformation systems. We supplement precise widget interaction with techniques for coarse object positioning and snapping. Our approach, which is implemented within a broader sketch-based modeling system, also integrates an underlying “widget history” to enable the fluid transfer of widgets between objects. An evaluation indicates that users familiar with 3D manipulation concepts can be taught how to efficiently use our system in under an hour.
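A minimal sketch of stroke-to-axis alignment, assuming the dominant direction of the sketched stroke is taken as the widget axis and snapped to a world axis when nearly aligned; the snap threshold is illustrative:

    import numpy as np

    WORLD_AXES = np.eye(3)

    def widget_axis(stroke_pts, snap_cos=0.9):
        """Dominant direction of a 3D stroke, snapped to a world axis if close."""
        centered = stroke_pts - stroke_pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        direction = vt[0] / np.linalg.norm(vt[0])   # principal component
        dots = WORLD_AXES @ direction
        best = np.argmax(np.abs(dots))
        if abs(dots[best]) > snap_cos:              # snap when nearly axis-aligned
            return np.sign(dots[best]) * WORLD_AXES[best]
        return direction

The same test against object-local axes or face normals would give the context-dependent alignment the paper describes.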

14.
Research on a binocular-vision-based hand localization and gesture recognition system
A new method for extracting human hand feature points is proposed. The method takes the centroid of the hand as the matching point and computes the target's position from a binocular-vision localization model; meanwhile, the hand contour is obtained through image segmentation, and the convex-hull points of the contour are used to recognize different gestures. On this basis, an optical hand localization and gesture recognition system is designed that locates the 3D position of the hand in space in real time while recognizing the corresponding gesture. It can serve as a driving interface for a virtual hand, enabling grasping, moving, and releasing of virtual objects.
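A minimal sketch of the centroid-based stereo localization described above, assuming a rectified camera pair and binary hand masks from segmentation; depth follows the classic Z = f·B/d relation:

    import cv2

    def hand_depth(mask_left, mask_right, focal_px, baseline_m):
        """3D hand position from hand masks in a rectified stereo pair."""
        def centroid(mask):
            m = cv2.moments(mask, binaryImage=True)   # assumes a non-empty mask
            return m["m10"] / m["m00"], m["m01"] / m["m00"]
        (xl, yl), (xr, _) = centroid(mask_left), centroid(mask_right)
        disparity = xl - xr                    # rectified: same row, shifted x
        z = focal_px * baseline_m / disparity  # classic Z = f * B / d
        return (xl, yl), z

Gesture recognition would then proceed from the convex hull of the mask contour (e.g. cv2.convexHull), counting hull/defect features to distinguish hand shapes.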

15.
Modeling tools typically have their own interaction methods for combining virtual objects. For realistic composition in 3D space, many researchers from the fields of virtual and augmented reality have been trying to develop intuitive interactive techniques using novel interfaces. However, many modeling applications require a long learning time for novice users because of unmanageable interfaces. In this paper, we propose two-handed tangible augmented reality interaction techniques that provide an easy-to-learn and natural combination method using simple augmented blocks. We have designed a novel interface called the cubical user interface, which has two tangible cubes that are tracked via markers. Using the interface, we suggest two types of interactions based on familiar metaphors from real object assembly. The first, the screw-driving method, recognizes the user's rotation gestures and allows them to screw virtual objects together. The second, the block-assembly method, adds objects based on their direction and position relative to predefined structures. We evaluate the proposed methods in detail with a user experiment that compares the different methods.
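A minimal sketch of detecting a screw-driving rotation between two tracked cube poses, accumulating the signed twist about the axis joining the cubes; this is an illustrative reconstruction, not the authors' recognizer:

    import numpy as np

    def twist_angle(rot_prev, rot_curr, axis):
        """Signed rotation of a tracked cube about a unit axis between frames."""
        rel = rot_curr @ rot_prev.T                  # relative rotation matrix
        # Angle from the trace; sign from the rotation's component along `axis`.
        angle = np.arccos(np.clip((np.trace(rel) - 1) / 2, -1, 1))
        w = np.array([rel[2, 1] - rel[1, 2],
                      rel[0, 2] - rel[2, 0],
                      rel[1, 0] - rel[0, 1]])        # ~ 2 sin(angle) * axis
        sign = np.sign(np.dot(w, axis)) or 1.0
        return sign * angle   # accumulate per frame; a full turn drives the screw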

16.
We propose a 3D interaction and autostereoscopic display system that uses gesture recognition, which can manipulate virtual objects in the scene directly by hand gestures and can display objects in 3D stereoscopy. The system consists of a gesture recognition and manipulation part as well as an autostereoscopic display as an interactive display part. To manipulate the 3D virtual scene, a gesture recognition algorithm is proposed, which uses spatio-temporal sequences of feature vectors to match predefined gestures. To get smooth 3D visualization, we utilize the programmable graphics pipeline of the graphics processing unit to accelerate data processing. We develop a prototype system for 3D virtual exhibition. The prototype system reaches frame rates of 60 fps and operates efficiently with a mean recognition accuracy of 90%.
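A minimal sketch of matching spatio-temporal feature-vector sequences against predefined gesture templates. Dynamic time warping is a common choice for this kind of matching, though the abstract does not name the exact matcher:

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """DTW distance between two (T, D) feature-vector sequences."""
        na, nb = len(seq_a), len(seq_b)
        D = np.full((na + 1, nb + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, na + 1):
            for j in range(1, nb + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[na, nb]

    def recognize(sequence, templates):
        """Name of the closest predefined gesture template."""
        return min(templates,
                   key=lambda name: dtw_distance(sequence, templates[name]))

Time warping absorbs differences in gesture speed between users, which plain frame-by-frame comparison would not.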

17.
Humans excel in manipulation tasks, a basic skill for our survival and a key feature in our man-made world of artefacts and devices. In this work, we study how humans manipulate simple daily objects, and construct a probabilistic representation model for the tasks and objects useful for autonomous grasping and manipulation by robotic hands. Human demonstrations of predefined object manipulation tasks are recorded from both the human hand and object points of view. The multimodal data acquisition system records human gaze, hand and finger 6D poses, finger flexure, tactile forces distributed on the inside of the hand, colour images and stereo depth maps, and also object 6D pose and object tactile forces using instrumented objects. From the acquired data, relevant features are detected concerning motion patterns, tactile forces and hand-object states. This enables modelling a class of tasks from sets of repeated demonstrations of the same task, so that a generalised probabilistic representation is derived to be used for task planning in artificial systems. An object-centred probabilistic volumetric model is proposed to fuse the multimodal data and map contact regions, gaze, and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadrics, and overlaying the contact points used, taking into account the task context. Results show that the features extracted are sufficient to distinguish key patterns that characterise each stage of the manipulation tasks, ranging from simple object displacement, where the same grasp is employed during manipulation (homogeneous manipulation), to more complex interactions such as object reorientation, fine positioning, and sequential in-hand rotation (dexterous manipulation). The framework presented retains the relevant data from human demonstrations, concerning both the manipulation and object characteristics, to be used by future grasp planning in artificial systems performing autonomous grasping.
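A minimal sketch of the superquadric primitive used when approximating segmented volume components, using the standard inside-outside function (F < 1 inside, F > 1 outside the surface):

    import numpy as np

    def superquadric_F(p, size, eps1, eps2):
        """Standard superquadric inside-outside function.

        p          : (..., 3) points in the superquadric's local frame
        size       : (a1, a2, a3) semi-axis lengths
        eps1, eps2 : shape exponents (1, 1 = ellipsoid; smaller = box-like)
        """
        x, y, z = (np.abs(p[..., i] / size[i]) for i in range(3))
        xy = (x ** (2 / eps2) + y ** (2 / eps2)) ** (eps2 / eps1)
        return xy + z ** (2 / eps1)

Fitting minimises this function's deviation from 1 over the component's voxels; contact points can then be attached to whichever fitted component they lie closest to.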

18.
Virtual Reality (VR) provides immersive visualization and intuitive interaction. Here, VR is used to enable biomedical professionals to develop deep learning (DL) models for image classification. Deep Neural Network (DNN) models are a powerful tool for data analysis, but they are also challenging to understand and develop. To make deep learning development more convenient and faster, we have established a landscape-style DNN development environment based on virtual reality. In this environment, users build neural networks simply by moving concrete objects with their own hands. The system automatically transforms these configurations into a trainable model and reports results on the test dataset in real time. In addition to giving users insight into the DNN models they are developing, the system visually enriches the landscape objects in the virtual environment with the corresponding parts of the model. In this way, it bridges the gap between professionals in different disciplines, providing a new perspective on model analysis and data interaction. The system further demonstrates that deep learning and visualization technologies can benefit from integrating virtual reality.
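A minimal sketch of the "placed objects become a trainable model" idea, assuming PyTorch and an illustrative block vocabulary; the real system's mapping is not specified in the abstract:

    import torch.nn as nn

    def build_model(blocks, in_features):
        """Turn an ordered list of placed blocks into a trainable network."""
        layers, width = [], in_features
        for block in blocks:
            if block["kind"] == "dense":
                layers.append(nn.Linear(width, block["units"]))
                width = block["units"]
            elif block["kind"] == "relu":
                layers.append(nn.ReLU())
            elif block["kind"] == "dropout":
                layers.append(nn.Dropout(block["p"]))
        return nn.Sequential(*layers)

    # e.g. a scene with three placed blocks becomes a small classifier
    model = build_model([{"kind": "dense", "units": 128}, {"kind": "relu"},
                         {"kind": "dense", "units": 10}], in_features=784)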

19.
A new vision-based framework and system for human–computer interaction in an Augmented Reality environment is presented in this article. The system allows the users to interact with computer-generated virtual objects using their hands directly. With an efficient color segmentation algorithm, the system is adaptable to different light conditions and backgrounds. It is also suitable for real-time applications. The dominant features on the palm are detected and tracked to estimate the camera pose. After the camera pose relative to the user's hand has been reconstructed, 3D virtual objects can be augmented naturally onto the palm for the user to inspect and manipulate. With an off-the-shelf web camera and computer, natural bare-hand interactions with 2D and 3D virtual objects can be achieved at low cost.
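A minimal sketch of the pose-estimation step, assuming OpenCV and a known 3D model of the tracked palm features; once the camera pose is recovered, virtual objects can be rendered onto the palm:

    import cv2
    import numpy as np

    def camera_pose_from_palm(palm_model_3d, palm_features_2d, K):
        """Camera pose relative to the palm from 2D-3D correspondences.

        palm_model_3d    : (N, 3) palm feature locations in a hand-local frame
        palm_features_2d : (N, 2) tracked image positions of the same features
        K                : (3, 3) camera intrinsic matrix
        """
        ok, rvec, tvec = cv2.solvePnP(
            palm_model_3d.astype(np.float64),
            palm_features_2d.astype(np.float64),
            K, None)                     # None = no lens distortion assumed
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
        return R, tvec                   # use to anchor virtual objects to the palm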

20.
This paper reports on the utility of gestures and speech to manipulate graphic objects. In the experiment described herein, three different populations of subjects were asked to communicate with a computer using either speech alone, gestures alone, or both. The task was the manipulation of a three-dimensional cube on the screen. They were asked to assume that the computer could see their hands, hear their voices, and understand their gestures and speech as well as a human could. A gesture classification scheme was developed to analyse the gestures of the subjects. A primary objective of the classification scheme was to determine whether common features would be found among the gestures of different users and classes of users. The collected data show a surprising degree of commonality among subjects in the use of gestures as well as speech. In addition to the uniformity of the observed manipulations, subjects expressed a preference for a combined gesture/speech interface. Furthermore, all subjects easily completed the simulated object manipulation tasks. The results of this research, and of future experiments of this type, can be applied to develop a gesture-based or gesture/speech-based system which enables computer users to manipulate graphic objects using easily learned and intuitive gestures to perform spatial tasks. Such tasks might include editing a three-dimensional rendering, controlling the operation of vehicles or operating virtual tools in three dimensions, or assembling an object from components. Knowledge about how people intuitively use gestures to communicate with computers provides the basis for future development of gesture-based input devices.
