Similar Documents
20 similar documents found (search time: 15 ms)
1.
Abstract. The Perceptive Workbench endeavors to create a spontaneous and unimpeded interface between the physical and virtual worlds. Its vision-based methods for interaction constitute an alternative to wired input devices and tethered tracking. Objects are recognized and tracked when placed on the display surface. By using multiple infrared light sources, the object's 3-D shape can be captured and inserted into the virtual interface. This ability permits spontaneity, since either preloaded objects or those objects selected at run-time by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3-D hand position, pointing direction, and sweeping arm gestures. Such gestures can enhance selection, manipulation, and navigation tasks. The Perceptive Workbench has been used for a variety of applications, including augmented reality gaming and terrain navigation. This paper focuses on the techniques used in implementing the Perceptive Workbench and the system's performance.

2.
This article proposes a 3-dimensional (3D) vision-based ambient user interface as an interaction metaphor that exploits a user's personal space and the user's dynamic gestures. In human-computer interaction, to provide natural interactions with a system, a user interface should not be a bulky or complicated device. In this regard, the proposed ambient user interface utilizes an invisible personal space to remove cumbersome devices; this personal space is virtually augmented by exploiting 3D vision techniques. For natural interactions with the user's dynamic gestures, the user of interest is extracted from the image sequences by the proposed user segmentation method. This method can retrieve 3D information from the segmented user image through 3D vision techniques and a multiview camera. With the retrieved 3D information of the user, a set of 3D boxes (SpaceSensor) can be constructed and augmented around the user; the user can then interact with the system by touching the augmented SpaceSensor. In tracking the user's dynamic gestures, the computational complexity of SpaceSensor is relatively lower than that of conventional 2-dimensional vision-based gesture tracking techniques, because only the touched positions of SpaceSensor are tracked. According to the experimental results, the proposed ambient user interface can be applied to various systems that require real-time tracking of users' dynamic gestures for interaction in both real and virtual environments.
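To make the SpaceSensor idea above concrete, here is a minimal sketch (Python/NumPy; the box sizes, offsets, and function names are illustrative assumptions, not the cited paper's implementation) that builds a few axis-aligned boxes around a tracked user and reports which box a 3D hand point is currently touching.

```python
# Sketch: axis-aligned "SpaceSensor" boxes around a user's centroid,
# plus a simple containment test against a tracked 3D hand point.
import numpy as np

def build_space_sensors(user_centroid, box_size=0.25,
                        offsets=((0.5, 0.0, 0.0), (-0.5, 0.0, 0.0), (0.0, 0.5, 0.0))):
    """Return a list of (min_corner, max_corner) boxes around the user (metres assumed)."""
    half = box_size / 2.0
    centroid = np.asarray(user_centroid, dtype=float)
    return [(centroid + np.asarray(off, float) - half,
             centroid + np.asarray(off, float) + half) for off in offsets]

def touched_sensor(hand_point, boxes):
    """Index of the first box containing the hand point, or None if nothing is touched."""
    p = np.asarray(hand_point, dtype=float)
    for i, (lo, hi) in enumerate(boxes):
        if np.all(p >= lo) and np.all(p <= hi):
            return i
    return None

# Example: a hand 0.5 m to the user's right falls inside sensor 0.
boxes = build_space_sensors(user_centroid=(0.0, 1.2, 2.0))
print(touched_sensor((0.5, 1.2, 2.0), boxes))  # -> 0
```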

3.
This paper proposes an augmented reality content authoring system that enables ordinary users who do not have programming capabilities to easily apply interactive features to virtual objects on a marker via gestures. The purpose of this system is to simplify augmented reality (AR) technology usage for ordinary users, especially parents and preschool children who are unfamiliar with AR technology. The system provides an immersive AR environment with a head-mounted display and recognizes users' gestures via an RGB-D camera. Users can freely create the AR content that they will be using, without any special programming ability, simply by connecting virtual objects stored in a database to the system. Once the marker is recognized by the system's RGB-D camera worn by the user, he or she can apply various interactive features to the marker-based AR content using simple gestures. The interactive features applied to AR content allow virtual objects to be enlarged, shrunk, rotated, and moved with hand gestures. In addition to this gesture-interactive feature, the proposed system also allows for tangible interaction using markers. The AR content that the user edits is stored in a database and is retrieved whenever the markers are recognized. The results of the comparative experiments conducted indicate that the proposed system is easier to use and yields higher interaction satisfaction than AR environments such as fixed-monitor setups and touch-based interaction on mobile screens.
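As a rough illustration of how such gesture-driven edits can be mapped to object transforms, the sketch below (my own simplification; the hand positions are assumed to come from the RGB-D tracker, and the function names are made up for illustration) derives a scale factor from the change in distance between two tracked hands and a translation from a single-hand drag.

```python
# Sketch: mapping simple hand gestures to scale / translate edits on a virtual object.
import numpy as np

def scale_from_hands(prev_left, prev_right, left, right):
    """Scale factor from the change in distance between two tracked hand positions."""
    d0 = np.linalg.norm(np.asarray(prev_right, float) - np.asarray(prev_left, float))
    d1 = np.linalg.norm(np.asarray(right, float) - np.asarray(left, float))
    return d1 / d0 if d0 > 1e-6 else 1.0

def translation_from_hand(prev_hand, hand):
    """Translation vector from a single-hand drag."""
    return np.asarray(hand, dtype=float) - np.asarray(prev_hand, dtype=float)

# Hands moving apart enlarge the object; moving one hand drags it.
print(scale_from_hands((0, 0, 0), (0.2, 0, 0), (0, 0, 0), (0.3, 0, 0)))  # -> 1.5
print(translation_from_hand((0, 0, 0.5), (0.1, 0, 0.5)))                 # -> [0.1, 0, 0]
```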

4.
Modeling tools typically have their own interaction methods for combining virtual objects. For realistic composition in 3D space, many researchers from the fields of virtual and augmented reality have been trying to develop intuitive interactive techniques using novel interfaces. However, many modeling applications require a long learning time for novice users because of unmanageable interfaces. In this paper, we propose two-handed tangible augmented reality interaction techniques that provide an easy-to-learn and natural combination method using simple augmented blocks. We have designed a novel interface called the cubical user interface, which has two tangible cubes that are tracked by marker tracking. Using the interface, we suggest two types of interactions based on familiar metaphors from real object assembly. The first, the screw-driving method, recognizes the user's rotation gestures and allows the user to screw virtual objects together. The second, the block-assembly method, adds objects based on their direction and position relative to predefined structures. We evaluate the proposed methods in detail with a user experiment that compares the different methods.
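The screw-driving metaphor can be approximated by accumulating the tracked cube's rotation about one of its axes. The sketch below is an assumed formulation (not the authors' code): it extracts the incremental rotation angle between two marker orientation matrices and triggers the "screw" action after a full turn.

```python
# Sketch: accumulate a tracked cube's rotation about a chosen axis ("screw-driving").
import numpy as np

def rotation_about_axis(R_prev, R_curr, axis):
    """Signed incremental angle (radians) about `axis` (expressed in the previous marker
    frame) between two 3x3 world-from-marker orientation matrices."""
    R_rel = R_prev.T @ R_curr
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-6:
        return 0.0
    rot_axis = np.array([R_rel[2, 1] - R_rel[1, 2],
                         R_rel[0, 2] - R_rel[2, 0],
                         R_rel[1, 0] - R_rel[0, 1]]) / (2.0 * np.sin(angle))
    return angle * np.sign(np.dot(rot_axis, axis))

class ScrewDetector:
    """Report completion once the accumulated twist exceeds one full turn (assumed threshold)."""
    def __init__(self, axis=(0.0, 0.0, 1.0), turn=2.0 * np.pi):
        self.axis, self.turn, self.total = np.asarray(axis, float), turn, 0.0

    def update(self, R_prev, R_curr):
        self.total += rotation_about_axis(R_prev, R_curr, self.axis)
        return abs(self.total) >= self.turn
```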

5.
We propose a 3D interaction and autostereoscopic display system that uses gesture recognition, allowing virtual objects in the scene to be manipulated directly by hand gestures and displayed in 3D stereoscopy. The system consists of a gesture recognition and manipulation part as well as an autostereoscopic display as an interactive display part. To manipulate the 3D virtual scene, a gesture recognition algorithm is proposed, which uses spatio-temporal sequences of feature vectors to match predefined gestures. To achieve smooth 3D visualization, we utilize the programmable graphics pipeline on the graphics processing unit to accelerate data processing. We developed a prototype system for a 3D virtual exhibition. The prototype system reaches frame rates of 60 fps and operates efficiently with a mean recognition accuracy of 90%.
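One common way to match spatio-temporal feature sequences against gesture templates is dynamic time warping; the sketch below uses it as a stand-in for the paper's matcher (the feature layout and template dictionary are assumptions).

```python
# Sketch: dynamic time warping between a live feature-vector sequence and gesture templates.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW distance between two (T, D) sequences of feature vectors."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_gesture(sequence, templates):
    """Return the name of the closest predefined gesture template."""
    return min(templates, key=lambda name: dtw_distance(sequence, templates[name]))
```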

6.
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions, either using traditional input devices or focusing on different input modalities, such as touch and mid-air gestures. Different virtual environments and diverse input modalities present specific issues in controlling object position, orientation and scaling: traditional mouse input, for example, presents non-trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state-of-the-art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid-air interfaces for interacting in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.

7.
Immersive visualisation is increasingly being used for comprehensive and rapid analysis of objects in 3D and of object dynamic behaviour in 4D. This raises the challenge of providing natural user interaction that enables effortless virtual object manipulation. Presented in this paper is the development and evaluation of an immersive human-computer interaction system based on stereoscopic viewing and natural hand gestures. The development is based on the integration of a back-projection stereoscopic system for object and hand display, a hybrid inertial and ultrasonic tracking system to provide the absolute positions and orientations of the user's head and hands, and a pair of high-degrees-of-freedom data gloves to provide the relative positions and orientations of digit joints and tips on both hands. The evaluation is based on a two-object scene with a virtual cube and a CT (computed tomography) volume, created to demonstrate real-time immersive object manipulation. The system is shown to provide a correct user view of objects and hands in 3D with depth, and to enable a user to use a number of simple hand gestures to perform basic object manipulation tasks involving selection, release, translation, rotation and scaling. Also included in the evaluation are some quantitative tests of the system performance in terms of speed and latency.
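The combination of an absolute hand pose (from the hybrid tracker) with glove-relative digit data can be sketched as a simple transform composition; the following code (an assumed data layout, not the system's API) computes fingertip positions in world coordinates and a naive grab test for object selection.

```python
# Sketch: fingertip world positions from tracker pose + glove offsets, and a grab test.
import numpy as np

def fingertip_world_positions(hand_R, hand_t, tip_offsets_local):
    """hand_R: 3x3 world-from-hand rotation; hand_t: hand position; offsets: (5, 3) in the hand frame."""
    return np.asarray(tip_offsets_local, float) @ np.asarray(hand_R, float).T + np.asarray(hand_t, float)

def is_grabbing(tip_positions, object_center, radius=0.06):
    """Grab when all fingertips are within an assumed radius (metres) of the object centre."""
    d = np.linalg.norm(np.asarray(tip_positions, float) - np.asarray(object_center, float), axis=1)
    return bool(np.all(d < radius))
```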

8.
In this paper, we propose an interactive designing method and a system based on it to create 3D objects and 2D images. This system consists of two subsystems for virtual sculpting to create a 3D shape and virtual printing to produce a picture with a printing block. In the virtual sculpting subsystem, a user can form solid objects with curved surfaces as if sculpting them. The user operates virtual chisels, and can remove or attach arbitrary shapes of ellipsoids or cubes from or to the workpiece. A 3D object generated by virtual sculpting looks like a real wooden sculpture. If using a board as a workpiece, a user can generate a virtual printing block. In the virtual printing subsystem, a user can synthesize a woodcut printing image from the virtual printing block mentioned above, a virtual paper sheet, and a printing brush. The user can synthesize a realistic woodcut print with a procedure similar to the actual woodcut printing.
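Virtual sculpting of this kind is often implemented over a voxelised workpiece; the sketch below (my own assumption about the representation, not the system's data structures) removes an ellipsoidal "chisel" volume from a boolean voxel grid. Attaching material is the same operation with the selected voxels set to True instead.

```python
# Sketch: carving an ellipsoidal chisel volume out of a voxelised workpiece.
import numpy as np

def carve_ellipsoid(voxels, center, radii, spacing=1.0):
    """Clear voxels inside an axis-aligned ellipsoid; `voxels` is a 3D boolean array."""
    nx, ny, nz = voxels.shape
    x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    cx, cy, cz = center
    rx, ry, rz = radii
    inside = (((x * spacing - cx) / rx) ** 2 +
              ((y * spacing - cy) / ry) ** 2 +
              ((z * spacing - cz) / rz) ** 2) <= 1.0
    voxels[inside] = False
    return voxels

# A 64^3 solid block with an ellipsoidal gouge taken out near one corner.
block = np.ones((64, 64, 64), dtype=bool)
carve_ellipsoid(block, center=(10, 10, 10), radii=(8, 6, 6))
```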

9.
A new vision-based framework and system for human–computer interaction in an Augmented Reality environment is presented in this article. The system allows users to interact with computer-generated virtual objects directly using their hands. With an efficient color segmentation algorithm, the system is adaptable to different light conditions and backgrounds. It is also suitable for real-time applications. The dominant features on the palm are detected and tracked to estimate the camera pose. After the camera pose relative to the user's hand has been reconstructed, 3D virtual objects can be augmented naturally onto the palm for the user to inspect and manipulate. With an off-the-shelf web camera and computer, natural bare-hand interactions with 2D and 3D virtual objects can be achieved at low cost.
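A generic skin-colour threshold in YCrCb space is a common starting point for this kind of hand segmentation; the sketch below uses fixed thresholds as a stand-in for the paper's adaptive algorithm (the threshold values are conventional assumptions, not taken from the article).

```python
# Sketch: YCrCb skin-colour segmentation of a BGR camera frame with OpenCV.
import cv2
import numpy as np

def segment_hand(bgr_frame, lower=(0, 133, 77), upper=(255, 173, 127)):
    """Binary mask of likely skin pixels, lightly cleaned with morphological opening."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array(lower, np.uint8), np.array(upper, np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```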

10.
NAVIG: augmented reality guidance system for the visually impaired
Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.
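Spatialised audio in a head-centred reference frame amounts to re-expressing the object's world position in the head frame; the sketch below (my own formulation with an assumed axis convention) derives the azimuth, elevation, and distance that a 3D sound renderer would need.

```python
# Sketch: world-space object position -> head-centred azimuth/elevation/distance.
import numpy as np

def head_centered_direction(object_world, head_position, head_R):
    """head_R is the 3x3 world-from-head rotation; axes assumed x-right, y-up, z-forward."""
    v_world = np.asarray(object_world, float) - np.asarray(head_position, float)
    v_head = np.asarray(head_R, float).T @ v_world  # rotate into the head frame
    x, y, z = v_head
    distance = float(np.linalg.norm(v_head))
    azimuth = float(np.degrees(np.arctan2(x, z)))
    elevation = float(np.degrees(np.arcsin(y / distance))) if distance > 1e-6 else 0.0
    return azimuth, elevation, distance
```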

11.
In projection-based Virtual Reality (VR) systems, typically only one head-tracked user views stereo images rendered from the correct view position. Other users are presented with a distorted image that moves with the first user's head motion, making it difficult for them to correctly view and interact with 3D objects in the virtual environment. In close-range VR systems, such as the Virtual Workbench, distortion effects are especially large because objects are within close range and users are relatively far apart. On these systems, multi-user collaboration proves to be difficult. In this paper, we analyze the problem and describe a novel, easy-to-implement method to prevent and reduce image distortion and its negative effects on close-range interaction task performance. First, our method combines a shared camera model and view distortion compensation. It minimizes the overall distortion for each user, while important user-personal objects such as interaction cursors, rays and controls remain distortion-free. Second, our method retains co-location for interaction techniques to make interaction more consistent. We performed a user experiment on our Virtual Workbench to analyze user performance under distorted view conditions with and without the use of our method. Our findings demonstrate the negative impact of view distortion on task performance and the positive effect our method introduces. This indicates that our method can enhance the multi-user collaboration experience on close-range, projection-based VR systems.

12.
Humans use a combination of gesture and speech to interact with objects and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMMs) on specific gestures. Given a sequence, we can find the start and end of various gestures using the support vector classifier, and find gesture likelihoods and parameters with an HMM. A multimodal recognition process is performed using rank-order fusion to merge speech and vision hypotheses. Finally we describe the use of our multimodal framework in a virtual world application that allows users to interact using gestures and speech.
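Rank-order fusion can be illustrated with a simple Borda-style merge of the two recognisers' ranked hypothesis lists; the paper's exact scheme may differ, so the sketch below is only an assumed variant.

```python
# Sketch: Borda-style rank-order fusion of speech and vision hypothesis lists.
def rank_order_fusion(speech_ranked, vision_ranked):
    """Inputs are lists of hypothesis labels, best first; lower fused rank sum wins."""
    candidates = set(speech_ranked) | set(vision_ranked)
    worst = max(len(speech_ranked), len(vision_ranked))
    rank = lambda lst, c: lst.index(c) if c in lst else worst  # unranked -> worst rank
    scores = {c: rank(speech_ranked, c) + rank(vision_ranked, c) for c in candidates}
    return sorted(candidates, key=lambda c: scores[c])

print(rank_order_fusion(["select", "rotate", "move"], ["select", "delete", "rotate"]))
# -> ['select', 'rotate', 'delete', 'move']
```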

13.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, some techniques, and some ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of a human's everyday life: One picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand- and a force-input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed in a combined or in an independent way by the system. The use of a hand-input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.

14.
Multi-user virtual reality systems enable natural collaboration in shared virtual worlds. Users can talk to each other, gesture and point into the virtual scenery as if it were real. As in reality, referring to objects by pointing often results in situations where objects are occluded from the other users' viewpoints. While in reality this problem can only be solved by adapting the viewing position, specialized individual views of the shared virtual scene enable various other solutions. As one such solution we propose show-through techniques to make sure that the objects one is pointing to can always be seen by others. We first study the impact of such augmented viewing techniques on the spatial understanding of the scene, the rapidity of mutual information exchange, and the proxemic behavior of users. To this end we conducted a user study in a co-located stereoscopic multi-user setup. Our study revealed advantages for show-through techniques in terms of comfort, user acceptance and compliance with social protocols, while spatial understanding and mutual information exchange are retained. Motivated by these results we further analyze whether show-through techniques may also be beneficial in distributed virtual environments. We investigated a distributed setup for two users, with each participant having their own display screen and being represented to the other by a minimalist avatar. In such a configuration there is a lack of mutual awareness, which hinders the understanding of each other's pointing gestures and decreases the relevance of social protocols in terms of proxemic behavior. Nevertheless, we found that show-through techniques can improve collaborative interaction tasks even in such situations.
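Whether a pointed-at object needs to be rendered show-through for another user can be decided with a simple occlusion test; the sketch below (an assumed geometric formulation, not the study's renderer) checks whether any occluder's bounding box lies on the segment between that user's eye and the target.

```python
# Sketch: segment-vs-AABB occlusion test to decide when to draw an object show-through.
import numpy as np

def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 intersect the axis-aligned box?"""
    o = np.asarray(p0, float)
    d = np.asarray(p1, float) - o
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (np.asarray(box_min, float) - o) / d
        t2 = (np.asarray(box_max, float) - o) / d
    t_near = np.nanmax(np.minimum(t1, t2))
    t_far = np.nanmin(np.maximum(t1, t2))
    return bool(t_near <= t_far and t_far >= 0.0 and t_near <= 1.0)

def needs_show_through(viewer_eye, target_center, occluder_boxes):
    """True if any occluder box lies between the other user's eye and the pointed-at object."""
    return any(segment_hits_aabb(viewer_eye, target_center, lo, hi) for lo, hi in occluder_boxes)
```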

15.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments are yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking, and simple hand pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
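Template matching for fingertip localisation can be sketched with OpenCV's normalised cross-correlation; the template image and acceptance threshold below are assumptions for illustration rather than the framework's actual settings.

```python
# Sketch: locating a fingertip template in a grayscale frame via normalised cross-correlation.
import cv2

def find_fingertip(gray_frame, fingertip_template, threshold=0.7):
    """Return the (x, y) of the best match, or None if its score is below the assumed threshold."""
    result = cv2.matchTemplate(gray_frame, fingertip_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None
```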

16.
The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device's video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device's touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks.

17.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a free-view video system prototype based on multiple sparse cameras that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras, and pre-computed tracking information of moving targets to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
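The view-dependent billboards mentioned above come down to orienting a textured quad so that it faces the virtual camera; the sketch below shows the standard construction (the quad size and up vector are illustrative assumptions, not the prototype's renderer).

```python
# Sketch: world-space corners of a camera-facing billboard quad centred on a moving object.
import numpy as np

def billboard_corners(object_pos, camera_pos, width=1.0, height=2.0, world_up=(0.0, 1.0, 0.0)):
    p = np.asarray(object_pos, float)
    to_cam = np.asarray(camera_pos, float) - p
    to_cam /= np.linalg.norm(to_cam)
    right = np.cross(np.asarray(world_up, float), to_cam)
    right /= np.linalg.norm(right)          # degenerate if the camera is directly overhead
    up = np.cross(to_cam, right)
    hw, hh = width / 2.0, height / 2.0
    return np.array([p - right * hw - up * hh, p + right * hw - up * hh,
                     p + right * hw + up * hh, p - right * hw + up * hh])
```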

18.
Many interface devices for virtual reality provide full immersion inside the virtual environment. This is appropriate for numerous applications that emphasize navigating through a virtual space. However, a large class of problems exists for which navigation is not the critical issue. Rather, these applications demand a fine-granularity visualization and interaction with virtual objects and scenes. This applies to a host of other applications typically performed on a desktop, table or workbench. Responsive Workbench technology offers a new way to develop virtual environments for this rather sizable class of applications. The Responsive Workbench operates by projecting a computer-generated, stereoscopic image off a mirror and through a table surface. Using stereoscopic shuttered glasses, users observe a 3D image displayed above the tabletop. By tracking the group leader's head and hand movements, the Responsive Workbench permits changing the view angle and interacting with the 3D scene. Other group members observe the scene as the group leader manipulates it, facilitating communication among observers. Typical methods for interacting with virtual objects on the workbench include speech recognition, gesture recognition and a simulated laser pointer (stylus). This article features Responsive Workbench applications from four institutions that have pioneered this technology. The four applications are: visualization, situational awareness, collaborative production modeling and planning, and a virtual windtunnel.

19.
Many casually taken ‘tourist’ photographs comprise architectural objects like houses, buildings, etc. Reconstructing such 3D scenes captured in a single photograph is a very challenging problem. We propose a novel approach to reconstruct such architectural scenes with minimal and simple user interaction, with the goal of providing 3D navigational capability to an image rather than acquiring accurate geometric detail. Our system, Peek-in-the-Pic, is based on a sketch-based geometry reconstruction paradigm. Given an image, the user simply traces out objects from it. Our system regards these as perspective line drawings, automatically completes them and reconstructs geometry from them. We make basic assumptions about the structure of traced objects and provide simple gestures for placing additional constraints. We also provide a simple sketching tool to progressively complete parts of the reconstructed buildings that are not visible in the image and cannot be automatically completed. Finally, we fill holes created in the original image when reconstructed buildings are removed from it, by automatic texture synthesis. Users can spend more time using interactive texture synthesis for further refining the image. Thus, instead of looking at flat images, a user can fly through them after some simple processing. Minimal manual work, ease of use and interactivity are the salient features of our approach.
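The hole-filling step can be approximated with off-the-shelf image inpainting; the sketch below uses OpenCV's Telea inpainting as a simple stand-in for the paper's texture synthesis (the mask is assumed to mark the pixels of the removed building).

```python
# Sketch: fill the hole left by a removed building using OpenCV inpainting.
import cv2

def fill_hole(image_bgr, hole_mask):
    """image_bgr: HxWx3 uint8 photo; hole_mask: HxW uint8, non-zero where the building was."""
    return cv2.inpaint(image_bgr, hole_mask, 5, cv2.INPAINT_TELEA)
```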

20.
In this paper, we present a believable interaction mechanism for manipulating multiple objects in a ubiquitous/augmented virtual environment. A believable interaction in a multimodal framework is defined as a persistent and consistent process whose feedback agrees with contextual experience and common sense. We present a tabletop interface as a quasi-tangible framework to provide believable processes. An enhanced tabletop interface is designed to support a multimodal environment. As an exemplar task, we applied the concept to quickly accessing and manipulating distant objects. A set of enhanced manipulation mechanisms is presented for remote manipulation, including inertial widgets, a transformable tabletop, and proxies. The proposed method is evaluated in terms of both performance and user acceptability in comparison with previous approaches. The proposed technique uses intuitive hand gestures and provides a higher level of believability. It can also support other types of accessing techniques such as browsing and manipulation.
