Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Manipulating and assembling elements in 3D space is a task of interest to a huge number of potential applications, whether they deal with real or abstract objects. Direct manipulation techniques in traditional interactive systems use 2D devices and do not allow easy manipulation of 3D objects. To facilitate user interaction, we have studied direct manipulation techniques in a virtual reality environment. A VR interface is naturally object-oriented and allows the definition of real-world metaphors. Operators can thus work in the virtual world much as they do in the real world: they perceive the position of objects through the depth cue of stereo view, and can grab and push them in any direction by means of a virtual hand until they reach their destination. They can put an object on top of another and line it up with other objects. We model the virtual world as a job-oriented world governed by a few simple rules that facilitate object positioning. In this paper, we describe the design and implementation strategies used to obtain real-time performance on a low-end workstation.
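The paper's actual data structures and positioning rules are not given in the abstract; as a hedged illustration only, a single frame of the grab-and-push loop it describes could be sketched like this, assuming hypothetical object records with a `pos` field and a tracked hand position:

```python
import numpy as np

def update_virtual_hand(hand_pos, grabbing, objects, held=None, grab_radius=0.1):
    """One frame of a grab-and-push loop: while the grab gesture is active,
    attach the nearest object within reach and move it with the hand."""
    if grabbing:
        if held is None and objects:
            # Find the object closest to the virtual hand.
            dist, nearest = min(((np.linalg.norm(o["pos"] - hand_pos), o)
                                 for o in objects), key=lambda t: t[0])
            if dist <= grab_radius:          # close enough to grab
                held = nearest
        if held is not None:
            held["pos"] = hand_pos.copy()    # object follows the virtual hand
    else:
        held = None                          # releasing the gesture drops it
    return held
```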

2.
3D object selection is more demanding when: 1) objects densely surround the target object, 2) the target object is significantly occluded, and 3) the target object is dynamically changing location. Most 3D selection techniques and guidelines were developed and tested in static or mostly sparse environments. In contrast, games tend to incorporate densely packed and dynamic objects as part of their typical interaction. With the increasing popularity of 3D selection in games using hand gestures or motion controllers, our current understanding of 3D selection needs revision. We present a study that compared four different selection techniques under five different scenarios based on varying object density and motion dynamics. We utilized two existing techniques, Raycasting and SQUAD, and developed two variations of them, Zoom and Expand, using iterative design. Our results indicate that while Raycasting and SQUAD both have weaknesses in terms of speed and accuracy in dense and dynamic environments, by making small modifications to them (i.e., flavoring), we can achieve significant performance increases.
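Raycasting is the baseline technique in the study above. A minimal sketch of how it picks the first object whose bounding sphere the pointer ray hits follows, assuming hypothetical object records with `center` and `radius` fields:

```python
import numpy as np

def raycast_select(origin, direction, objects):
    """Return the object whose bounding sphere the pointer ray hits first,
    or None if the ray misses everything."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for obj in objects:
        oc = obj["center"] - origin
        t = np.dot(oc, direction)             # closest approach along the ray
        if t < 0:
            continue                          # object is behind the pointer
        d2 = np.dot(oc, oc) - t * t           # squared ray-to-center distance
        if d2 <= obj["radius"] ** 2 and t < best_t:
            best, best_t = obj, t
    return best
```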

3.
A Survey of 3D Interaction Techniques
Recent gains in the performance of 3D graphics hardware and rendering systems have not been matched by a corresponding improvement in our knowledge of how to interact with the virtual environments we create; we therefore need to examine interaction further if we are to improve the overall quality of our interactive 3D systems. This paper examines some of the interaction techniques that have been developed for object manipulation, navigation and application control in 3D virtual environments. The use of both mouse-based techniques and 3D input devices is considered, along with the role of feedback and some aspects of tools and widgets.

4.
The usability of three-dimensional (3D) interaction techniques depends upon both the interface software and the physical devices used. However, little research has addressed the issue of mapping 3D input devices to interaction techniques and applications. This is especially crucial in the field of Virtual Environments (VEs), where there exists a wide range of potential 3D input devices. In this paper, we discuss the use of Pinch Gloves™ – gloves that report contact between two or more fingers – as input devices for VE systems. We begin with an analysis of the advantages and disadvantages of the gloves as a 3D input device. Next, we present a broad overview of three novel interaction techniques we have developed using the gloves, including a menu system, a text input technique, and a two-handed navigation technique. All three of these techniques have been evaluated for both usability and task performance. Finally, we speculate on further uses for the gloves.
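The abstract does not detail how finger contacts map to commands. As a hedged illustration only, a chord-style dispatch in which each set of touching fingers triggers an action (all bindings hypothetical, not the paper's menu system) could look like this:

```python
# Hypothetical pinch-chord dispatch: Pinch Gloves report which fingers are in
# contact, and each chord can be bound to an interface action.
PINCH_ACTIONS = {
    frozenset({"thumb", "index"}): "open_menu",
    frozenset({"thumb", "middle"}): "confirm_selection",
    frozenset({"thumb", "index", "middle"}): "start_navigation",
}

def on_pinch(touching_fingers):
    """Map the currently touching fingers to an action name."""
    return PINCH_ACTIONS.get(frozenset(touching_fingers), "none")

print(on_pinch(["thumb", "index"]))  # -> open_menu
```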

5.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, some techniques, and some ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of a human's everyday life: One picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand- and a force-input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed in a combined or in an independent way by the system. The use of a hand-input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.
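A minimal sketch of the tool-object syntax, under the assumption that interaction proceeds in two steps (select a tool, then apply it to an object); the gesture recognizer itself is not shown:

```python
class ToolObjectInteraction:
    """Tool-object syntax: the user first picks up a tool (e.g., via a hand
    pose), then uses it on a graphical object, as in everyday actions."""

    def __init__(self):
        self.current_tool = None

    def pick_tool(self, tool_name):
        self.current_tool = tool_name

    def use_on(self, object_name):
        if self.current_tool is None:
            return "no tool selected"      # first step of the syntax missing
        return f"applied {self.current_tool} to {object_name}"

session = ToolObjectInteraction()
session.pick_tool("paintbrush")
print(session.use_on("cube"))  # -> applied paintbrush to cube
```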

6.
The use of large displays is becoming increasingly prevalent, but the usability of three-dimensional (3D) interaction with large displays is still at an early stage of development. One way to improve the usability of 3D interaction is to develop an appropriate control–display (CD) gain function. Nevertheless, unlike in desktop environments, the effects of the relationship between control space and display space in 3D interaction have not been investigated. Moreover, 3D interaction with large displays is natural and intuitive, similar to how we work in the physical world; a CD gain function that considers human behavior might therefore improve the usability of interaction with large displays. The first experiment was conducted to identify the characteristics of users' natural hand motion and their perception of the target in distal pointing. Thirty people participated, and the characteristics of users' natural hand movements and the 3D coordinates of their pointing positions were derived. These characteristics were considered in the development of motion–display (MD) gain, a new position-to-position CD mapping. MD gain was then experimentally verified by comparing it with Laser pointing, currently the best existing CD mapping technique; 30 people participated. MD gain was superior to the existing pointing technique in terms of both performance and subjective satisfaction, and it can also be personalized for further improvement. This is an initial attempt to reflect natural human pointing gestures in a distal pointing technique, and the developed technique (MD gain) was experimentally shown to be superior to existing techniques. This result matters because even a marginal improvement in the performance of pointing, a fundamental and frequent task, can have a large effect on users' productivity. These results can be used as a resource for understanding the characteristics of users' natural hand movement, and MD gain can be directly applied to situations in which distal pointing is needed, such as interacting with smart TVs or wall displays. Furthermore, the concept of mapping natural human behavior in motor space to an object in visual space can be applied to any interactive system.
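The abstract does not give the MD-gain formula. As a hedged illustration of the general CD-gain concept it builds on, here is a common speed-dependent gain that maps hand displacement to cursor displacement; all constants are illustrative, not the paper's parameters:

```python
def apply_cd_gain(hand_delta, dt, g_min=1.0, g_max=8.0, v_low=0.05, v_high=0.6):
    """Map a 2D hand displacement (metres) over time dt (seconds) to a display
    displacement using a speed-dependent control-display gain."""
    speed = (hand_delta[0] ** 2 + hand_delta[1] ** 2) ** 0.5 / dt
    # Interpolate the gain between g_min and g_max over the speed band.
    t = min(max((speed - v_low) / (v_high - v_low), 0.0), 1.0)
    gain = g_min + t * (g_max - g_min)
    return (hand_delta[0] * gain, hand_delta[1] * gain)

print(apply_cd_gain((0.02, 0.01), dt=0.016))  # fast motion -> amplified
```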

7.
Computers & Graphics, 2012, 36(8): 1119–1131
Multi-touch interfaces have emerged with the widespread use of smartphones. Although many people interact with 2D applications through touchscreens, interaction with 3D applications remains little explored. Most 3D object manipulation techniques have been created by designers who have generally left users out of the design process. We conducted a user study to better understand how non-technical users tend to interact with a 3D object using touchscreen inputs. The experiment consists of 3D cube manipulations along three viewpoints for rotations, scaling and translations (RST). Sixteen users participated and 432 gestures were analyzed. To classify the data, we introduce a taxonomy for 3D manipulation gestures with touchscreens. Then, we identify a set of strategies employed by users to perform the proposed cube transformations. Our findings suggest that each participant uses several strategies, with one predominant. Furthermore, we conducted a study comparing touchscreen and mouse interaction for 3D object manipulations. The results suggest that gestures differ according to the device, and that touchscreens are preferred for the proposed tasks. Finally, we propose some guidelines to help designers create more user-friendly tools.
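The RST transformations studied above are commonly recovered from the motion of two touch points with a standard decomposition; a minimal sketch (the standard construction, not the paper's taxonomy) follows:

```python
import math

def two_finger_rst(p1, p2, q1, q2):
    """Decompose the motion of two touch points (p1,p2 -> q1,q2) into the
    classic rotate-scale-translate parameters of multi-touch interfaces."""
    def mid(a, b): return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    def vec(a, b): return (b[0] - a[0], b[1] - a[1])
    m0, m1 = mid(p1, p2), mid(q1, q2)
    translation = vec(m0, m1)                       # midpoint motion
    v0, v1 = vec(p1, p2), vec(q1, q2)
    scale = math.hypot(*v1) / math.hypot(*v0)       # pinch-distance ratio
    rotation = math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0])
    return translation, scale, rotation

print(two_finger_rst((0, 0), (1, 0), (0, 0), (0, 2)))  # rotate 90°, scale 2x
```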

8.
In this paper, we present SAGE2, a software framework that enables local and remote collaboration on Scalable Resolution Display Environments (SRDE). An SRDE can be any configuration of displays, ranging from a single monitor to a wall of tiled flat-panel displays. SAGE2 creates a seamless ultra-high-resolution desktop across the SRDE. Users can wirelessly connect to the SRDE with their own devices in order to interact with the system. Many users can simultaneously use a drag-and-drop interface to transfer local documents and show them on the SRDE, use a mouse pointer and keyboard to interact with existing content on the SRDE, and share their screen so that it is viewable to all. SAGE2 can be used in many configurations and is able to support many communities working with various types of media and high-resolution content, from research meetings to creative sessions to education. SAGE2 is browser-based, utilizing a web server to host content, WebSockets for message passing, and HTML with JavaScript for rendering and interaction. Recent web developments, with the emergence of HTML5, have allowed browsers to use advanced rendering techniques without requiring plug-ins (canvas drawing, WebGL 3D rendering, native video player, etc.). One major benefit of browser-based software is that there are no installation requirements for users and it is inherently cross-platform. A user simply needs a web browser on the device he/she wishes to use as an interaction tool for the SRDE. This considerably lowers the barrier to entry for engaging in meaningful collaboration sessions.
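SAGE2's actual WebSocket protocol is not given in the abstract. A hedged sketch of the general pattern it describes — JSON messages with a name and payload dispatched to registered handlers — might look like this (message and field names are hypothetical, not SAGE2's):

```python
import json

HANDLERS = {}

def on(message_name):
    """Register a handler for a named message, decorator-style."""
    def register(fn):
        HANDLERS[message_name] = fn
        return fn
    return register

@on("moveWindow")  # hypothetical message name
def move_window(payload):
    print(f"move app {payload['id']} to ({payload['x']}, {payload['y']})")

def dispatch(raw):
    """Decode one incoming JSON message and route it to its handler."""
    msg = json.loads(raw)
    handler = HANDLERS.get(msg["name"])
    if handler:
        handler(msg.get("payload", {}))

dispatch(json.dumps({"name": "moveWindow",
                     "payload": {"id": 3, "x": 100, "y": 50}}))
```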

9.
The discussion on the advantages and disadvantages of 2D, 3D, and VR interfaces and their applicability to different types of systems, users, and information has led to a series of stand-alone implementations that cannot be combined into an integrated approach. The acceptance of the different interaction techniques will depend on their success in practical applications, i.e., in systems that are used by different users for different purposes. Since this acceptance is especially hard to achieve in computer-critical environments, such as medicine, we developed a software environment that allows for the development, integration, and user-centered evaluation of existing and new interaction techniques for use in medical applications. This environment is equipped with an innovative message-passing functionality that provides communication to and among application objects in 2D, 3D, and VR. Furthermore, the environment contains a component for user-adapted interaction and system support at runtime.

10.
This paper describes the concepts, design, implementation, and performance evaluation of a 3D-based user interface for accessing IoT-based Smart Environments (IoT-SE). The generic interaction model of the described work addresses some major challenges of human-IoT-SE interaction, such as the cognitive overload associated with manual device selection in complex IoT-SE, loss of user control, a missing system image, and over-automation. To address these challenges, we propose a 3D-based mobile interface for mixed-initiative interaction in IoT-SE. The 3D visualization and 3D UI, acting as the central feature of the system, create a logical link between physical devices and their virtual representation on the end user's mobile devices. The user can thus easily identify a device within the environment based on its position, orientation, and form, and access the identified devices through the 3D interface for direct manipulation within the scene. This overcomes the problem of manual device selection. In addition, the 3D visualization provides a system image for the IoT-SE, which supports users in understanding the ambience and what is going on in it. Furthermore, the mobile interface allows users to control how much, and in what way, the IoT-SE automates the environment. For example, users can stop or postpone system-triggered automatic actions if they do not want them, and can remove a rule permanently, thereby deleting smart behaviors of their IoT-SE. This helps to overcome the automation challenges. In this paper, we present the design, implementation and evaluation of the proposed interaction system. We chose smart meeting rooms as the context for prototyping and evaluating our interaction concepts; however, the presented concepts and methods are generic and could be adapted to similar environments such as smart homes. We conducted a subjective usability evaluation (ISO-Norm 9241/110) with 16 users. All in all, the study results indicate that the proposed 3D user interface achieved a high score on the ISO-Norm scale.

11.
A large body of HCI research focuses on devices and techniques for interacting with applications in more natural ways, such as gestures or direct pointing with fingers or hands. In particular, recent years have seen a growing interest in laser pointer-style (LPS) interaction, which allows users to point directly at the screen from a distance through a device handled like a common laser pointer. Several LPS techniques have been evaluated in the literature, usually focusing on users' performance and subjective ratings, but not on the effects of these techniques on the musculoskeletal system. One cannot rule out that "natural" interaction techniques, although found attractive by users, require movements that might increase the likelihood of musculoskeletal disorders (MSDs) with respect to a traditional keyboard and mouse. Our study investigates the physiological effects of an LPS interaction technique (based on the Wii Remote) compared to a mouse and keyboard setup, used in a sitting and a standing posture. The task (object arrangement) is representative of user actions repeatedly carried out with 3D applications. The obtained results show that the LPS interaction caused more muscle exertion than mouse and keyboard. Posture also played a significant role. The results highlight the importance of extending current studies of novel interaction techniques with thorough electromyographic (EMG) analyses.
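The abstract does not specify the EMG processing pipeline. A common first step when quantifying muscle exertion is a moving-RMS envelope of the raw signal, sketched here as an assumption rather than the study's actual method:

```python
import numpy as np

def emg_rms_envelope(emg, fs, win_ms=200):
    """Moving-RMS envelope of a raw EMG signal sampled at fs Hz, using a
    win_ms-millisecond averaging window."""
    win = max(1, int(fs * win_ms / 1000))
    squared = np.asarray(emg, dtype=float) ** 2
    kernel = np.ones(win) / win              # moving-average kernel
    return np.sqrt(np.convolve(squared, kernel, mode="same"))
```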

12.
Managing the geometry of a 3D scene efficiently is a key aspect of an interactive 3D application, and it matters even more when targeting portable devices, which have limited hardware capabilities. Developing new means of improving interaction with 3D content on mobile devices is therefore essential. The aim of this work is to present a technique that can manage the level of detail of 3D meshes on portable devices. This solution has been devised considering the restrictions that these devices pose. The results section shows that the integration was successful while maintaining good performance.
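The abstract does not give the level-of-detail selection policy. A hedged sketch of a common distance-based scheme, with thresholds doubling per level (all constants illustrative), follows:

```python
def select_lod(camera_pos, mesh_pos, lod_meshes, base_distance=5.0):
    """Pick a level-of-detail mesh from a pre-built chain by viewing distance.
    lod_meshes is ordered from finest (index 0) to coarsest; the distance
    threshold doubles at each level, a common heuristic on weak hardware."""
    dx = [m - c for m, c in zip(mesh_pos, camera_pos)]
    distance = sum(d * d for d in dx) ** 0.5
    level, threshold = 0, base_distance
    while distance > threshold and level < len(lod_meshes) - 1:
        level += 1
        threshold *= 2
    return lod_meshes[level]

print(select_lod((0, 0, 0), (0, 0, 12), ["fine", "medium", "coarse"]))
```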

13.
Three-dimensional models of objects and their creation process are central to a variety of applications in Augmented Reality. In this article, we present a system designed for in-situ modeling using interactive techniques on two generic classes of camera-equipped handheld devices. The system allows online building of 3D wireframe models through a combination of user interaction and automated methods. In particular, we concentrate on a rigorous evaluation of the two devices and interaction methods in the context of 3D feature selection. We present the key components of our system, discuss our findings and results, and identify design recommendations.

14.
We present a pressure-augmented tactile 3D data navigation technique, specifically designed for small devices, motivated by the need to support interactive visualization beyond traditional workstations. While touch input has been studied extensively on large screens, current techniques do not scale to small and portable devices. We use phone-based pressure sensing with a binary mapping to separate interaction degrees of freedom (DOF), allowing users to easily select different manipulation schemes (e.g., users first perform only rotation, then switch to translation with a simple pressure input). We compare our technique to traditional 3D-RST (rotation, scaling, translation) using a docking task in a controlled experiment. The results show that our technique increases the accuracy of interaction, with limited impact on speed. We discuss the implications for 3D interaction design and verify that our results extend to older devices with pseudo-pressure and are valid in realistic phone-usage scenarios.
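A minimal sketch of the binary pressure mapping described above: a light touch drives rotation and a firm press switches the same drag to translation. The threshold and gains are illustrative assumptions, not the paper's calibration:

```python
def manipulation_mode(pressure, threshold=0.5):
    """Binary pressure mapping that separates DOF: light touch -> rotation,
    firm press -> translation (threshold is illustrative)."""
    return "translate" if pressure >= threshold else "rotate"

def apply_touch(drag, pressure, pose):
    """Route one drag event (dx, dy) to the DOF selected by pressure."""
    if manipulation_mode(pressure) == "rotate":
        pose["yaw"] += drag[0] * 0.01    # drag controls rotation DOF
        pose["pitch"] += drag[1] * 0.01
    else:
        pose["x"] += drag[0] * 0.001     # same drag now controls translation
        pose["y"] += drag[1] * 0.001
    return pose

pose = {"yaw": 0.0, "pitch": 0.0, "x": 0.0, "y": 0.0}
print(apply_touch((10, 5), pressure=0.8, pose=pose))  # firm press: translate
```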

15.
The increasing availability of high-quality 3D printing devices and services now enables ordinary people to create, edit and repair products for their custom needs. However, effective use of current 3D modeling and design software is still a challenge for most novice users. In this work, we introduce a new computational method to automatically generate an organic interface structure that allows existing objects to be statically supported within a prescribed physical environment. Taking as input the digital model of the environment and a set of points that the generated structure should touch, our biologically inspired growth algorithm automatically produces a support structure that, when physically fabricated, helps keep the target object in the desired position and orientation. The proposed growth algorithm uses an attractor-based form-generation process built on the space colonization algorithm and introduces a novel target-attractor concept. Moreover, obstacle avoidance, symmetrical growth, smoothing and sketch-modification techniques have been developed to adapt the nature-inspired growth algorithm into a design tool that is interactive with the design space. We present the details of our technique and illustrate its use on a collection of examples from different categories.
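The classic space colonization loop the paper builds on can be sketched in a few lines. This is the standard algorithm, not the paper's extended version with target attractors, and all radii are illustrative:

```python
import numpy as np

def grow_step(nodes, attractors, influence=2.0, kill=0.4, step=0.25):
    """One space-colonization iteration: each attractor pulls its nearest
    branch node within the influence radius, pulled nodes spawn children
    toward the mean attractor direction, and attractors closer than the
    kill distance to any node are removed as satisfied."""
    pulls = {}
    for a in attractors:
        dists = np.linalg.norm(nodes - a, axis=1)
        i = int(np.argmin(dists))
        if dists[i] < influence:
            pulls.setdefault(i, []).append(a - nodes[i])
    new_nodes = []
    for i, dirs in pulls.items():
        d = np.mean(dirs, axis=0)
        d /= np.linalg.norm(d)
        new_nodes.append(nodes[i] + step * d)   # grow toward the attractors
    if new_nodes:
        nodes = np.vstack([nodes, new_nodes])
    attractors = [a for a in attractors
                  if np.min(np.linalg.norm(nodes - a, axis=1)) > kill]
    return nodes, attractors

nodes = np.array([[0.0, 0.0, 0.0]])             # root of the structure
attractors = [np.array([1.0, 1.0, 0.0]), np.array([-1.0, 0.5, 0.0])]
nodes, attractors = grow_step(nodes, attractors)
```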

16.
Content-based shape retrieval techniques can facilitate 3D model resource reuse, 3D model modeling, object recognition, and 3D content classification. Recently, more and more researchers have attempted to solve the problems of partial retrieval in the domains of computer graphics, vision, CAD, and multimedia. Unfortunately, there is little comprehensive discussion in the literature of the state-of-the-art methods for partial shape retrieval. In this article we focus on reviewing the partial shape retrieval methods of the last decade, to help novices grasp the latest developments in this field. We first give the definition of partial retrieval and discuss its desirable capabilities. Secondly, we classify the existing methods for partial shape retrieval into three classes by several criteria, describe the main ideas and techniques of each class, and compare their advantages and limitations in detail. We also present several relevant 3D datasets and corresponding evaluation metrics, which are necessary for evaluating partial retrieval performance. Finally, we discuss possible research directions for addressing partial shape retrieval.

17.
We present an interface for 3D object manipulation in which standard transformation tools are replaced with transient 3D widgets invoked by sketching context-dependent strokes. The widgets are automatically aligned to axes and planes determined by the user's stroke. Sketched pivot-points further expand the interaction vocabulary. Using gestural commands, these basic elements can be assembled into dynamic, user-constructed 3D transformation systems. We supplement precise widget interaction with techniques for coarse object positioning and snapping. Our approach, which is implemented within a broader sketch-based modeling system, also integrates an underlying "widget history" to enable the fluid transfer of widgets between objects. An evaluation indicates that users familiar with 3D manipulation concepts can be taught how to efficiently use our system in under an hour.
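The abstract says widgets are automatically aligned to axes determined by the user's stroke. A hedged sketch of one plausible alignment step, snapping a stroke direction to the closest world axis, follows; the paper's actual inference also considers planes and is not shown:

```python
import numpy as np

def snap_stroke_to_axis(stroke_dir, axes=np.eye(3)):
    """Align a sketched stroke direction to the world axis it most nearly
    follows, preserving the stroke's sign along that axis."""
    stroke_dir = stroke_dir / np.linalg.norm(stroke_dir)
    scores = [abs(np.dot(stroke_dir, ax)) for ax in axes]
    best = int(np.argmax(scores))
    sign = 1.0 if np.dot(stroke_dir, axes[best]) >= 0 else -1.0
    return sign * axes[best]

print(snap_stroke_to_axis(np.array([0.1, -0.9, 0.2])))  # -> [0, -1, 0]
```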

18.
Mobile Social Networks (MSNs) facilitate connections between mobile devices and can provide an effective mobile computing environment in which users access, share, and distribute information. However, MSNs are virtual social spaces, so the available information may not be trustworthy to all. Therefore, trust inference plays a critical role in establishing social links between mobile users. In MSNs, users' transactions will increasingly be complemented by group contacts; hence, future usage patterns of mobile devices will involve more group contact. In this paper, we describe the implicit social behavioral graph, i.e., the ego-i graph, which is formed by users' contacts, and present an algorithm for initializing the ego-i graph. We rate these relationships to form a dynamic contact rank, which enables users to evaluate the trust values between users within the context of MSNs. We then calculate group-based trust values according to the level of contact, interaction evolution, and users' attributes. Based on group-based trust, we obtain a cluster trust by aggregating inter-group trust values. Due to the unique nature of MSNs, we discuss the propagation of cluster trust values for global MSNs. Finally, we evaluate the performance of our trust model through simulations; the results demonstrate the effectiveness of group-based behavioural relationships in MSNs' information-sharing systems.
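The abstract names the inputs to group-based trust (contact level, interaction evolution, user attributes) but not the formula. A hypothetical weighted aggregation, with made-up weights, can illustrate the idea:

```python
def group_trust(contact_level, interaction_history, attribute_similarity,
                weights=(0.4, 0.4, 0.2)):
    """Hypothetical weighted combination of the three factors the abstract
    names into a group-based trust value in [0, 1]; the paper's exact
    formula is not given, so the weights here are invented."""
    w1, w2, w3 = weights
    evolution = sum(interaction_history) / max(len(interaction_history), 1)
    return w1 * contact_level + w2 * evolution + w3 * attribute_similarity

def cluster_trust(group_trusts):
    """Cluster trust as the mean of inter-group trust values, a simple
    stand-in for the paper's aggregation."""
    return sum(group_trusts) / len(group_trusts)

g1 = group_trust(0.8, [0.6, 0.7, 0.9], 0.5)
g2 = group_trust(0.3, [0.4, 0.5], 0.7)
print(cluster_trust([g1, g2]))
```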

19.
Accessibility revolves around building products, including electronic devices and digital content, so that diverse users can conveniently utilize them, irrespective of their capabilities. In recent years, the concept of touchscreen accessibility has gained remarkable attention, especially given the considerable reliance on mobile touchscreen devices (MTDs) for information acquisition and dissemination that we witness today. For users who are visually impaired, MTDs unlock different opportunities for independence and functioning. Thus, with the increasing ubiquity of MTDs and their potential extensive utility for all demographics, it becomes paramount to ensure that these devices and the content delivered on them are accessible. And while it might seem straightforward to achieve accessibility on MTDs, attaining this outcome is governed by an interplay of different elements: platform (i.e., operating system) built-in support for accessibility features; content-rendering modalities and structures pertaining to user needs and the peculiarities of MTDs as specified in standard accessibility guidelines; user studies uncovering preferences and best practices for interacting with MTDs; national legislation and policies; and the use of third-party devices such as assistive technologies. In this paper, mobile touchscreen accessibility for users who are visually impaired is surveyed with a focus on three aspects: (1) the existing built-in accessibility features within popular mobile platforms; (2) the nature of non-visual interaction and how users who are visually impaired access, navigate, and create content on MTDs; and (3) studies that tackled different issues pertaining to touchscreen accessibility, such as extraction of user needs and interaction preferences, identification of the most critical accessibility problems encountered on MTDs, integration of mobile accessibility into standard accessibility guidelines, and investigation of existing guidelines in terms of sufficiency and appropriateness.

20.
Pointing gestures are our natural way of referencing distant objects and are thus widely used in HCI for controlling devices. Due to the inherent inaccuracies of current pointing models, most systems using pointing gestures so far rely on visual feedback showing users where they are pointing. However, in many environments, e.g., smart homes, it is rarely possible to display cursors, since most devices do not contain a display. We therefore raise the question of how to facilitate accurate pointing-based interaction in a cursorless context. In this paper we present two user studies showing that previous cursorless techniques are rather inaccurate, as they lack important considerations about users' characteristics that would help minimize inaccuracy. We show that pointing accuracy can be significantly improved by accounting for users' handedness and ocular dominance. In a first user study (n = 33), we reveal the large effect of ocular dominance and handedness on human pointing behavior. Current ray-casting techniques neglect both ocular dominance and handedness as influences on pointing behavior, precluding them from accurate cursorless selection. With a second user study (n = 25), we show that accounting for ocular dominance and handedness yields significantly more accurate selections than two previously published ray-casting techniques. This speaks for the importance of considering users' characteristics further when developing selection techniques that foster more robust, accurate selections.
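One hedged way to account for ocular dominance in a ray-casting model is to root the selection ray at the dominant eye rather than the head center. The sketch below assumes a tracked head position, a unit right-vector for the head, and a tracked fingertip; it is an illustration of the idea, not the study's corrected model:

```python
import numpy as np

def dominant_eye_ray(head_pos, right_vec, fingertip, dominant="right",
                     ipd=0.063):
    """Cast the selection ray from the dominant eye through the fingertip.
    right_vec is the head's unit right-vector; ipd defaults to a typical
    adult interpupillary distance in metres (an assumption)."""
    side = 1.0 if dominant == "right" else -1.0
    eye = head_pos + side * (ipd / 2) * right_vec   # offset to dominant eye
    direction = fingertip - eye
    return eye, direction / np.linalg.norm(direction)

origin, direction = dominant_eye_ray(np.array([0.0, 1.7, 0.0]),
                                     np.array([1.0, 0.0, 0.0]),
                                     np.array([0.3, 1.5, -0.5]))
print(origin, direction)
```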
