Similar Literature
20 similar articles found (search time: 93 ms)
1.
Today, virtual reality (VR) systems are widely available through low-cost devices such as Oculus Rift and HTC Vive. Although VR technology has so far been centered on entertainment, there is a growing interest from developers, technology companies, and consumers to evaluate it in a wider variety of contexts. This paper explores the effectiveness of visualizing and interacting with three-dimensional graphs in VR in comparison with the traditional approach. In particular, we present an empirical evaluation study for exploring and interacting with three-dimensional graphs using Oculus Rift and Leap Motion. We designed several interfaces exploiting the natural user interface in a VR environment and compared them with traditional mouse–keyboard and joypad configurations. Our evaluation suggests that, although these upcoming VR technologies are more challenging than traditional ones, they facilitate user involvement during graph interaction and visualization tasks, given the enjoyable experience elicited when combining gesture-based interfaces and VR.

2.
The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks.

3.
The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures. We evaluate uWave using a large gesture library with over 4000 samples for eight gesture patterns collected from eight users over one month. uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. We also present applications of uWave in gesture-based user authentication and interaction with 3D mobile user interfaces. In particular, we report a series of user studies that evaluates the feasibility and usability of lightweight user authentication. Our evaluation shows both the strength and limitations of gesture-based user authentication.
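uWave's ability to work from a single training sample per gesture comes from template matching with dynamic time warping (DTW) over accelerometer traces. A minimal sketch of that idea follows; the function names, the per-sample Euclidean cost, and the toy templates are illustrative assumptions, not the authors' exact implementation (uWave additionally quantizes the acceleration values):

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of 3-axis samples."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # per-sample Euclidean cost
            D[i][j] = cost + min(D[i - 1][j],      # skip a sample of a
                                 D[i][j - 1],      # skip a sample of b
                                 D[i - 1][j - 1])  # match samples
    return D[n][m]

def recognize(sample, templates):
    """Label of the stored template closest to the sample under DTW."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```

A single stored trace per gesture can suffice because DTW absorbs variation in execution speed by warping the time axis rather than requiring many examples.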

4.
Complex virtual human representation provides more natural interaction and communication among participants in networked virtual environments, hence it is expected to increase the sense of being together within the same virtual world. We present a flexible framework for the integration of virtual humans in networked collaborative virtual environments. A modular architecture allows flexible representation and control of the virtual humans, whether they are controlled by a physical user using all sorts of tracking and other devices, or by an intelligent control program turning them into autonomous actors. The modularity of the system allows for fairly easy extensions and integration with new techniques, making it interesting also as a testbed for various domains from “classic” VR to psychological experiments. We present results in terms of functionalities, example applications and measurements of performance and network traffic with an increasing number of participants in the simulation.

5.
We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature allowing well-compartmented, application independent, and application specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application specific modules. We demonstrate four challenging 2D applications on a nine projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls.

6.
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or focusing on different input modalities, such as touch and mid‐air gestures. Different virtual environments and diverse input modalities present specific issues to control object position, orientation and scaling: traditional mouse input, for example, presents non‐trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid‐air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state‐of‐the‐art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid‐air interfaces, to interact in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.

7.
We propose a framework with a flexible architecture that has been designed and implemented for collaborative user interaction in massive applications on the Web. We introduce the concept of interperception and use technologies such as massive virtual environments and teleoperation to create environments (mixing virtual and real ones) that promote accessibility and transparency in the interaction between people, and between people and animate devices (such as robots), through the Web. Experiments with massive games, with interactive applications in digital television, and with users and robots interacting in virtual and real versions of museums and cultural centers are presented to validate our proposal.

8.
In recent years, consumers have witnessed a technological revolution that has delivered more-realistic experiences in their own homes through high-definition, stereoscopic televisions and natural, gesture-based video game consoles. Although these experiences are more realistic, offering higher levels of fidelity, it is not clear how the increased display and interaction aspects of fidelity impact the user experience. Since immersive virtual reality (VR) allows us to achieve very high levels of fidelity, we designed and conducted a study that used a six-sided CAVE to evaluate display fidelity and interaction fidelity independently, at extremely high and low levels, for a VR first-person shooter (FPS) game. Our goal was to gain a better understanding of the effects of fidelity on the user in a complex, performance-intensive context. The results of our study indicate that both display and interaction fidelity significantly affect strategy and performance, as well as subjective judgments of presence, engagement, and usability. In particular, performance results were strongly in favor of two conditions: low-display, low-interaction fidelity (representative of traditional FPS games) and high-display, high-interaction fidelity (similar to the real world).

9.
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects aimed at improving immersive feeling and natural interaction. Most of these studies have utilized AR paddles or AR cubes for interactions. However, these interactions overly constrain the users in their ability to directly manipulate AR objects and are limited in providing natural feeling in the user interface. This paper presents a novel approach to natural and intuitive interactions through a direct hand touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand touch recognition, the point cloud generated by Kinect is analyzed and filtered. The proposed approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and offers greater control efficiency in diverse AR environments. Furthermore, it can easily solve the occlusion problem of the hand and arm region inherent in conventional AR approaches through the analysis of the extracted point cloud. We present the effectiveness and advantages of the proposed approach by demonstrating several implementation results such as interactive AR car design and a touchable AR pamphlet. We also present an analysis of a usability study to compare the proposed approach with other well-known AR interactions.

10.
Accurately understanding a user’s intention is often essential to the success of any interactive system. An information retrieval system, for example, should address the vocabulary problem (Furnas et al., 1987) to accommodate different query terms users may choose. A system that supports natural user interaction (e.g., full-body game and immersive virtual reality) must recognize gestures that are chosen by users for an action. This article reports an experimental study on the gesture choice for tasks in three application domains. We found that the chance for users to produce the same gesture for a given task is below 0.355 on average, and offering a set of gesture candidates can improve the agreement score. We discuss the characteristics of those tasks that exhibit the gesture disagreement problem and those tasks that do not. Based on our findings, we propose some design guidelines for free-hand gesture-based interfaces.
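The per-task "chance of producing the same gesture" reported above is commonly quantified in gesture-elicitation work with an agreement score: the sum of squared proportions of identical proposals for a task. A minimal sketch of that standard measure follows (the study may use a variant; the gesture labels are invented):

```python
from collections import Counter

def agreement_score(proposals):
    """Sum of squared proportions of identical gesture proposals for one task:
    1.0 when every participant proposes the same gesture, 1/n when all n differ."""
    total = len(proposals)
    return sum((count / total) ** 2 for count in Counter(proposals).values())
```

For example, if three of four participants propose "swipe" and one proposes "tap", the score is (3/4)² + (1/4)² = 0.625; scores near the paper's 0.355 average indicate substantial disagreement.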

11.
The paper discusses basic approaches to implementing a graphical user interface (GUI) as virtual two- and three-dimensional environments for human-computer interaction. A design approach to virtual four-dimensional environments based on special visual effects is proposed. The functional capabilities of the FDC package, which implements a prototype of such an environment, and the principles of user operation are described.

12.
In this age of (near-)adequate computing power, the power and usability of the user interface is as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP (Windows, Icons, Menus, Point-and-Click GUI based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts and can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involve more of our senses by combining the use of gesture, speech, and haptics.

13.
Visualization plays a crucial role in molecular and structural biology. It has been successfully applied to a variety of tasks, including structural analysis and interactive drug design. While some of the challenges in this area can be overcome with more advanced visualization and interaction techniques, others are challenging primarily due to the limitations of the hardware devices used to interact with the visualized content. Consequently, visualization researchers are increasingly trying to take advantage of new technologies to facilitate the work of domain scientists. Some typical problems associated with classic 2D interfaces, such as regular desktop computers, are a lack of natural spatial understanding and interaction, and a limited field of view. These problems could be solved by immersive virtual environments and corresponding hardware, such as virtual reality head-mounted displays. Thus, researchers are investigating the potential of immersive virtual environments in the field of molecular visualization. There is already a body of work ranging from educational approaches to protein visualization to applications for collaborative drug design. This review focuses on molecular visualization in immersive virtual environments as a whole, aiming to cover this area comprehensively. We divide the existing papers into different groups based on their application areas, and types of tasks performed. Furthermore, we also include a list of available software tools. We conclude the report with a discussion of potential future research on molecular visualization in immersive environments.

14.
The natural user interface (NUI) has been investigated in a variety of fields in application software. This paper proposes an approach to generate virtual agents that can support users for NUI-based applications through human–robot interaction (HRI) learning in a virtual environment. Conventional HRI learning is carried out by repeating processes that are time-consuming, complicated and dangerous because of certain features of robots. Therefore, a method is needed to train virtual agents that interact with virtual humans imitating human movements in a virtual environment. The result of this virtual agent can then be applied to NUI-based interactive applications after the interaction learning is completed. The proposed method was applied to a model of a typical house in a virtual environment with virtual humans performing daily-life activities such as washing, eating, and watching TV. The results show that the virtual agent can predict a human’s intent, identify actions that are helpful to the human, and provide services 16% faster than a virtual agent trained using traditional Q-learning.
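As a rough illustration of the traditional Q-learning baseline mentioned above, here is a minimal tabular Q-learning sketch on a toy one-dimensional corridor. The states, actions, and reward are invented stand-ins for the paper's far richer virtual-home environment:

```python
import random

def train_q(n_states=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent starts at state 0 and
    receives reward 1 for reaching the rightmost state.
    Actions: 0 = step left, 1 = step right (moves are clamped to the corridor)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * best future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy (pick the action with the larger Q value in each state) walks straight to the rewarded state; the paper's contribution is learning such behavior from imitation of virtual humans rather than from this kind of slow trial-and-error alone.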

15.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments are yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking, and simple hand pose recognition. Implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.

16.
17.
When interacting in a virtual environment, users are confronted with a number of interaction techniques. These interaction techniques may complement each other, but in some circumstances can be used interchangeably. Because of this situation, it is difficult for the user to determine which interaction technique to use. Furthermore, the use of multimodal feedback, such as haptics and sound, has proven beneficial for some, but not all, users. This complicates the development of such a virtual environment, as designers are not sure about the implications of the addition of interaction techniques and multimodal feedback. A promising approach for solving this problem lies in the use of adaptation and personalization. By incorporating knowledge of a user’s preferences and habits, the user interface should adapt to the current context of use. This could mean that only a subset of all possible interaction techniques is presented to the user. Alternatively, the interaction techniques themselves could be adapted, e.g. by changing the sensitivity or the nature of the feedback. In this paper, we propose a conceptual framework for realizing adaptive personalized interaction in virtual environments. We also discuss how to establish, verify and apply a user model, which forms the first and important step in implementing the proposed conceptual framework. This study results in general and individual user models, which are then verified to benefit users interacting in virtual environments. Furthermore, we conduct an investigation to examine how users react to a specific type of adaptation in virtual environments (i.e. switching between interaction techniques). When such an adaptation is integrated into a virtual environment, users respond positively to it: their performance significantly improves and their level of frustration decreases.

18.
Because 3D interaction in immersive environments is unfriendly to 2D interface operations, flow-field data management tasks that depend on 2D list interfaces become complex and inefficient. To organize and manage flow-field data efficiently in an immersive virtual environment, and to enhance users' understanding of the spatial information of the flow field, we propose a multi-view interaction method for managing data blocks in immersive flow-field visualization. The method constructs a small 3D view that provides an overview of the scene, and completes management operations on multi-block flow-field data through several combined multi-view interaction modes, such as "main-view interaction + small-view assistance" and "small-view interaction + main-view feedback". Finally, we built a gesture-based immersive flow-field visualization system, defined several interaction tasks, and compared the multi-view method with a traditional interaction method in terms of learning time, completion time, and user feedback. Experimental results show that, compared with the traditional method, the multi-view method significantly improves the efficiency of data management tasks.

19.
Knowledge of user movement in mobile environments paves the way for intelligent resource allocation and event scheduling for a variety of applications. Existing schemes for estimating user mobility are limited in scope because they rely on repetitive patterns of user movement. Such patterns neither exist nor are easy to recognize in soft real time in open environments such as parks, malls, or streets. We propose a novel scheme for Real-time Mobility and Orientation Estimation for Mobile Environments (MOEME). MOEME employs the concept of temporal distances and uses logistic regression to make real-time estimations about user movement. MOEME is also used to make predictions about the absolute orientation of users. MOEME relies only on opportunistic message exchange and is fully distributed and scalable, requiring neither a central infrastructure nor the Global Positioning System. MOEME has been tested on real-world and synthetic mobility traces; it predicts the direction and count of users with up to 90% accuracy and enhances successful video downloads in shared environments.
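MOEME's use of logistic regression over temporal-distance features can be sketched with a plain batch gradient-descent implementation. The single scalar feature and toy labels below are illustrative assumptions, not the paper's actual feature set:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=3000):
    """Batch gradient descent on the logistic loss for one scalar feature.
    Returns the learned weight w and bias b."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                          # gradient w.r.t. w
            gb += (p - y)                              # gradient w.r.t. b
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def predict(w, b, x):
    """1 if the model puts at least 0.5 probability on the positive class."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0
```

In a MOEME-like setting, x would be a temporal-distance feature derived from opportunistic message exchanges and y an observed movement event; each node could fit such a model locally, consistent with the scheme's fully distributed design.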

20.
A gesture-based interaction system for smart homes is a part of a complex cyber-physical environment, for which researchers and developers need to address major challenges in providing personalized gesture interactions. However, current research efforts have not tackled the problem of personalized gesture recognition that often involves user identification. To address this problem, we propose in this work a new event-driven service-oriented framework called gesture services for cyber-physical environments (GS-CPE) that extends the architecture of our previous work gesture profile for web services (GPWS). To provide user identification functionality, GS-CPE introduces a gesture password recognition algorithm for gesture-based user identification, built on a two-phase cascading classifier that combines a hidden Markov model with the Golden Section Search, which achieves an accuracy rate of 96.2% with a small training dataset. To support personalized gesture interaction, an enhanced version of the Dynamic Time Warping algorithm with multiple gestural input sources and dynamic template adaptation support is implemented. Our experimental results demonstrate that the algorithm can achieve an average accuracy rate of 98.5% in practical scenarios. Comparison results reveal that GS-CPE has faster response time and higher accuracy rate than other gesture interaction systems designed for smart-home environments.
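The Golden Section Search used in GS-CPE's cascading classifier is a standard bracketing method for minimizing a unimodal function, for instance a distance score over a single alignment or threshold parameter. A generic sketch follows, independent of the paper's specific objective function:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimum of a unimodal function f on [a, b].
    Each iteration shrinks the bracket by a factor of 1/phi ~ 0.618,
    reusing one interior evaluation point per step."""
    invphi = (math.sqrt(5) - 1) / 2
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Unlike grid search, this needs only one new function evaluation per iteration, which suits the small training datasets and fast response times the framework targets.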
