Similar Articles
20 similar articles found (search time: 15 ms)
1.
Distributable user interfaces enable users to distribute user interface interaction objects (e.g. panels, buttons, input fields, and checkboxes) across different displays, using a set of distribution primitives to manipulate them in real time. This work presents how this kind of user interface facilitates computer-supported collaborative learning in modern classrooms. These classrooms provide teachers and students with display ecosystems consisting of stationary displays, such as smart projectors and smart TVs, as well as mobile displays owned by teachers and students, such as smartphones, tablets, and laptops. The distribution of user interface interaction objects enables teachers to modify, in real time, the interaction objects available to students, in order to control and promote collaboration and participation during learning activities. We propose developing this type of application using an extension of the CAMELEON reference framework that supports the definition of UI distribution models. The Essay exercise is presented as a case study in which teachers control collaboration among students by distributing user interface interaction objects.
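A minimal sketch of what such distribution primitives could look like (the names `Display`, `Distributor`, `move`, and `copy` are hypothetical illustrations, not the paper's CAMELEON-based API):

```python
# Hypothetical UI distribution primitives: move/copy interaction objects
# between displays at run time.
from dataclasses import dataclass, field

@dataclass
class Display:
    name: str                      # e.g. "teacher-projector", "student-tablet-03"
    objects: set = field(default_factory=set)

class Distributor:
    """Moves or copies interaction objects (panels, buttons, ...) in real time."""
    def move(self, obj: str, src: Display, dst: Display) -> None:
        src.objects.discard(obj)
        dst.objects.add(obj)       # object now rendered only on dst

    def copy(self, obj: str, src: Display, dst: Display) -> None:
        if obj in src.objects:
            dst.objects.add(obj)   # object rendered on both displays

# Teacher pushes an answer panel to a student's tablet during an exercise.
projector = Display("teacher-projector", {"essay-text", "answer-panel"})
tablet = Display("student-tablet-03")
Distributor().move("answer-panel", projector, tablet)
print(projector.objects, tablet.objects)
```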

2.
Three-dimensional capabilities on mobile devices are increasing, and interactivity is becoming a key feature of these tools. Users are expected to actively engage with 3D content instead of being passive consumers. Because touch-screens let users manipulate 3D graphical elements by touching them directly, touch-based interaction is a natural and appealing style of input for 3D applications. However, developing 3D interaction techniques for handheld touch-screen devices is not a straightforward task. One issue is that when interacting with 3D objects, users occlude them with their fingers. Furthermore, because the user's finger covers a large area of the screen, there is a limit to the smallest object users can touch. In this paper, we first examine existing 3D interaction techniques based on their performance on handheld devices. Then, we present a set of precise Dual-Finger 3D Interaction Techniques for small displays. Finally, we present the results of an experimental study evaluating the usability, performance, and error rate of the proposed and existing 3D interaction techniques.
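As an illustration of the precision problem and one common remedy, the sketch below shows a hypothetical dual-finger refinement scheme in which a second finger switches the cursor to a reduced control-display gain; the paper's actual Dual-Finger techniques may work differently:

```python
# Hypothetical dual-finger precision scheme: the first finger places a coarse
# cursor; while a second finger is down, motion is scaled by a reduced
# control-display gain so targets smaller than the fingertip can be hit.
def update_cursor(cursor, dx, dy, second_finger_down, fine_gain=0.25):
    """Return the new (x, y) cursor position for finger motion (dx, dy) in pixels."""
    gain = fine_gain if second_finger_down else 1.0
    return (cursor[0] + dx * gain, cursor[1] + dy * gain)

cursor = (160.0, 240.0)
cursor = update_cursor(cursor, 40, 0, second_finger_down=False)  # coarse: +40 px
cursor = update_cursor(cursor, 40, 0, second_finger_down=True)   # fine: +10 px
print(cursor)  # (210.0, 240.0)
```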

3.
The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. For this device, a user interface and new interaction techniques have been developed based on transparent props: a tracked hand-held pen and a pad. These props, particularly the pad, are augmented with 3D graphics from the Virtual Table's display and can serve as a palette for tools and controls, as well as a window-like see-through interface, a plane-shaped and through-the-plane tool supporting a variety of new interaction techniques. This paper reports on an extension of this user-interface design space that uses gestural input to create and control solid geometries for CAD and conceptual design. Gestural interfaces are a common method for interacting with virtual environments in a natural, habitual way. The motion-based gesture recognition presented here uses fuzzy logic to support a predictable, flexible, and efficient learning process. This new interaction paradigm greatly increases the Virtual Table's suitability for design tasks: traditional CAD dialogue can be combined with intuitive rapid sketching of geometry on the pad, and the resulting events and objects can be associated with scene details below the translucent tablet.
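A minimal sketch of the fuzzy-logic idea (illustrative only; the paper's features and rule base are not reproduced here): a triangular membership function grades how well a measured motion feature fits a linguistic term such as "slow" or "fast":

```python
# Illustrative fuzzy classification of one gesture feature (stroke speed).
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_speed(speed_cm_s):
    """Return the best-matching linguistic term and all membership degrees."""
    terms = {"slow": tri(speed_cm_s, 0, 5, 15),
             "medium": tri(speed_cm_s, 5, 15, 30),
             "fast": tri(speed_cm_s, 15, 30, 60)}
    return max(terms, key=terms.get), terms

label, degrees = classify_speed(12.0)
print(label, degrees)  # "medium", with graded memberships for each term
```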

4.
The smart phone: a ubiquitous input device
We show how modern mobile phones (Weiser's tabs) can interact with their environment, especially large situated displays (Weiser's boards). Smart phones' emerging capabilities are fueling a rise in the use of mobile phones as input devices to such resources as situated displays, vending machines, and home appliances. Mobile phones' prevalence gives them great potential to be the default physical interface for ubiquitous computing applications. We survey interaction techniques that use mobile phones as input devices to ubiquitous computing environments. We use the term smart phone to describe an enhanced mobile phone; our analysis blurs the line between smart phones and PDAs such as the Palm Pilot because their feature sets continue to converge.

5.
Upcoming mobile devices will have flexible displays, allowing us to explore alternate forms of user authentication. On flexible displays, users can interact with the device by deforming the surface of the display through bending. In this paper, we present Bend Passwords, a new type of user authentication that uses bend gestures as its input modality. We ran three user studies to evaluate the usability and security of Bend Passwords, comparing it to PINs on a mobile phone. The first two studies evaluated the creation and memorability of user-chosen and system-assigned passwords; the third examined the security problem of shoulder-surfing passwords on mobile devices. Our results show that bend passwords are a promising authentication mechanism for flexible display devices, and we provide eight design recommendations for implementing Bend Passwords on such devices.
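A minimal sketch of how a bend password might be represented and verified, assuming a hypothetical encoding of each gesture as a (bend location, direction) pair; the paper's gesture vocabulary may differ:

```python
# Hypothetical bend-password encoding: an ordered sequence of bend gestures,
# each identified by where the display is bent and in which direction.
TOP_RIGHT, BOTTOM_RIGHT, FULL_SIDE = "TR", "BR", "SIDE"
UP, DOWN = "+", "-"

def matches(entered, stored):
    """Order-sensitive comparison of two bend-gesture sequences.
    (A real system would compare against a salted hash, not a plain sequence.)"""
    return entered == stored

stored = [(TOP_RIGHT, UP), (BOTTOM_RIGHT, DOWN), (FULL_SIDE, UP)]
attempt = [(TOP_RIGHT, UP), (BOTTOM_RIGHT, DOWN), (FULL_SIDE, UP)]
print(matches(attempt, stored))  # True
```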

6.
Most 3D modelling software has been developed for conventional 2D displays and, as such, lacks support for true depth perception. This contributes to making polygonal 3D modelling tasks challenging, particularly when models are complex and consist of a large number of overlapping components (e.g. vertices, edges) and objects (i.e. parts). Research has shown that users of 3D modelling software often encounter a range of difficulties, which collectively can be defined as focus and context awareness problems. These include maintaining position and orientation awareness, as well as recognizing distances between individual components and objects in 3D space. In this paper, we present five visualization and interaction techniques we have developed for multi-layered displays to better support focus and context awareness in 3D modelling tasks. The results of a user study we conducted show that three of these five techniques improve users' 3D modelling task performance.

7.
The use of large displays is becoming increasingly prevalent, but the usability of three-dimensional (3D) interaction with large displays is still at an early stage of development. One way to improve it is to develop an appropriate control–display (CD) gain function. Nevertheless, unlike in desktop environments, the effects of the relationship between control space and display space in 3D interaction have not been investigated. Moreover, 3D interaction with large displays is natural and intuitive, similar to how we work in the physical world, so a CD gain function that considers human behavior might improve the usability of interaction with large displays. The first experiment was conducted to identify the characteristics of users' natural hand motion and their perception of targets in distal pointing. Thirty people participated, and the characteristics of their natural hand movements and the 3D coordinates of their pointing positions were derived. These characteristics informed the development of motion–display (MD) gain, a new position-to-position CD mapping. MD gain was then experimentally verified against Laser pointing, currently the best existing CD mapping technique, with 30 participants. MD gain was superior to the existing pointing technique in terms of both performance and subjective satisfaction, and it can be personalized for further improvement. This is an initial attempt to reflect natural human pointing gestures in a distal pointing technique, and the developed technique was experimentally shown to be superior to existing ones. This result matters because even a marginal improvement in the performance of pointing, a fundamental and frequent task, can have a large effect on users' productivity. These results can serve as a resource for understanding the characteristics of users' natural hand movement, and MD gain can be applied directly to situations in which distal pointing is needed, such as interacting with smart TVs or wall displays. Furthermore, the concept of mapping natural human behavior in motor space to objects in visual space can be applied to any interactive system.
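To make the notion of a position-to-position CD mapping concrete, here is a minimal sketch in the spirit of MD gain; the linear calibration of a comfortable motor-space box onto the display is an assumption, whereas the paper derives its mapping from measured natural pointing poses:

```python
# Position-to-position control-display mapping: hand positions inside a
# comfortable motor-space box are mapped onto display coordinates, so small
# comfortable hand movements cover the whole wall.
def md_map(hand, motor_box, display_size):
    """hand: (x, y) in metres; motor_box: ((x0, y0), (x1, y1)); returns pixels."""
    (x0, y0), (x1, y1) = motor_box
    u = (hand[0] - x0) / (x1 - x0)          # normalise into [0, 1]
    v = (hand[1] - y0) / (y1 - y0)
    return (u * display_size[0], v * display_size[1])

# A 40 cm x 30 cm hand workspace drives a 4K display wall.
pos = md_map(hand=(0.10, 0.15), motor_box=((0.0, 0.0), (0.4, 0.3)),
             display_size=(3840, 2160))
print(pos)  # (960.0, 1080.0)
```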

8.
Fast battery discharge is still the most nerve-wracking issue for smartphone users. Although many energy-saving methods have been studied, users are still not satisfied with their phones' battery life. The power management system provides a low-power mode when the phone has not been in use for a long time; while the user is interacting with the phone, the current system assumes the user is engaged and keeps the device in active mode. However, this assumption does not always hold: after the user's interaction, the device processes the request before displaying the result, and during this period the user cannot see any meaningful information on the phone. In this paper, we propose a new low-power mode that puts the smartphone's output devices into low-power mode while the phone is preparing the result for the user. We name this o-sleep, an output-oriented power-saving mode. While a device is processing a user's request, its output may require preparation time; we treat this as the device's output idle time and put the phone's user interfaces into sleep mode while keeping other subsystems active. To validate the concept, we applied our technique to various smartphone applications under varying operating environments. In our experiments, the smartphone entered o-sleep mode for up to 58% of its total usage time across the test scenarios, and a usability study supported the feasibility of the proposed method.
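A minimal sketch of the o-sleep idea as an illustrative state machine (not the authors' implementation): the output device sleeps during the output idle time and wakes once the result is ready:

```python
# Illustrative o-sleep state machine: the display sleeps while a request is
# being processed (output idle time) and wakes when the result is ready.
import time

class OutputPower:
    def __init__(self):
        self.display_on = True

    def o_sleep(self):
        self.display_on = False     # dim/turn off the screen only

    def wake(self):
        self.display_on = True

def handle_request(power, compute):
    power.o_sleep()                 # output idle time begins with the request
    result = compute()              # CPU, radios, etc. stay in active state
    power.wake()                    # result ready: restore the display
    return result

power = OutputPower()
print(handle_request(power, lambda: time.sleep(0.5) or "page rendered"))
```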

9.
The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image from the device's video camera while 3D virtual objects enrich the scene they see. Interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device's touch screen. Such interaction with virtual objects is not only restrictive but also difficult, for example due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate high entertainment value but low accuracy when objects are manipulated in midair, suggesting great possibilities for leisure applications but limited use for serious tasks.

10.
In this work, we propose a new mode of interaction using hand gestures captured by the back camera of a mobile device. Using a simple intuitive two‐finger picking gesture, the user can perform mouse click and drag‐and‐drop operations from the back of the device, which provides an unobstructed touch‐free interaction. This method allows users to operate on a tablet and maintain full awareness of the display, which is especially suitable for certain working environments, such as the machine shop, garage, kitchen, gym, or construction site, where people's hands may be dirty, wet, or in gloves. The speed, accuracy, and error rate of this interaction have been evaluated and compared with the typical touch interaction in an empirical study. The results show that, although this method is not, in general, as efficient and accurate as direct touch, participants still consider it an effective and intuitive method of interacting with mobile devices in environments where direct touch is impractical.

11.
Hand-held devices are becoming computationally more powerful and are being equipped with special sensors and non-traditional displays for diverse applications beyond making phone calls. This raises the question of whether virtual reality providing a minimum level of immersion and presence might be realized on a hand-held device with only a relatively small display. In this paper, we propose that motion-based interaction can widen the perceived field of view (FOV) beyond the actual physical FOV and, in turn, increase the sense of presence and immersion to a level comparable to that of desktop or projection-display-based VR systems. We implemented a prototype hand-held VR platform and conducted two experiments to verify this hypothesis. Our experimental study revealed that when motion-based interaction was used, the FOV perceived by the user on the small hand-held device was significantly greater (by around 50%) than the actual FOV; larger display platforms using conventional button or mouse/keyboard interfaces did not exhibit this phenomenon. In addition, the level of presence users felt on the hand-held platform was higher than or comparable to that of VR platforms with larger displays. We hypothesize that this phenomenon is analogous to the way the human visual system compensates for differences in acuity across the retina through saccadic activity. The paper demonstrates the distinct possibility of realizing reasonable virtual reality even with devices with a small visual FOV and limited processing power.
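As a worked example of the FOV arithmetic involved, the sketch below computes the physical horizontal FOV subtended by a hand-held screen and the roughly 50% wider perceived FOV the study reports under motion-based interaction; the screen size and viewing distance are illustrative assumptions:

```python
# Physical FOV of a hand-held screen, and the ~50% wider perceived FOV
# reported under motion-based interaction (illustrative numbers).
import math

def physical_fov_deg(screen_width_m, viewing_distance_m):
    """Horizontal visual angle subtended by the screen at the eye."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * viewing_distance_m)))

fov = physical_fov_deg(screen_width_m=0.09, viewing_distance_m=0.35)  # ~14.7 deg
perceived = fov * 1.5   # motion-based panning widened perceived FOV by ~50%
print(round(fov, 1), round(perceived, 1))
```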

12.
The context of mobility raises many issues for geospatial applications providing location-based services. Mobile device limitations, such as a small user interface footprint and pen input while in motion, result in information overload on such devices and in interfaces that are difficult to navigate and interact with. This has become a major issue as mobile GIS applications are now used by a wide group of users, including novices such as tourists, for whom it is essential to provide easy-to-use applications. Despite this, comparatively little research has been conducted to address the mobility problem. We are particularly concerned with the limited interaction techniques available to users of mobile GIS, which play a primary role in the complexity of using such an application while mobile. Our research therefore focuses on multimodal interfaces as a means of presenting users with a wider choice of modalities for interacting with mobile GIS applications. Multimodal interaction is particularly advantageous in a mobile context, enabling users of location-based applications to choose the mode of input that best suits their current task and location. The focus of this article is a comprehensive user study demonstrating the benefits of multimodal interfaces for mobile geospatial applications.

13.
Introduction to building projection-based tiled display systems
This tutorial introduces the concepts and technologies needed to build projector-based tiled display systems. Tiled displays offer scalability, high resolution, and large formats for various applications, and are an emerging technology for constructing semi-immersive visualization environments capable of presenting high-resolution images from scientific simulations. The largest impact may well arise from using large-format tiled displays as one of possibly multiple displays in building information or active spaces that surround the user with diverse ways of interacting with data and multimedia information flows. These environments may prove the ultimate successor to the desktop metaphor for information technology work. Several fundamental technological problems must be addressed to make tiled displays practical, including the choice of screen materials and support structures; the choice of projectors, projector supports, and optional fine positioners; techniques for integrating image tiles into a seamless whole; interface devices for interaction with applications; display generators and interfaces; and the display software environment.
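As one concrete instance of "integrating image tiles into a seamless whole", the sketch below shows linear intensity (edge) blending across projector overlaps; real systems also correct for projector gamma and color differences, which this illustration omits:

```python
# Linear edge blending: intensity is ramped across the overlap region so two
# overlapping projectors sum to roughly constant brightness at the seam.
def blend_weight(x, tile_left, tile_right, overlap):
    """Weight in [0, 1] for a pixel at x inside one tile's horizontal span."""
    if x < tile_left + overlap:                  # left overlap: ramp up
        return (x - tile_left) / overlap
    if x > tile_right - overlap:                 # right overlap: ramp down
        return (tile_right - x) / overlap
    return 1.0                                   # interior: full intensity

# Two 1024-px tiles with a 128-px overlap: weights sum to ~1 in the seam.
w_left = blend_weight(960, 0, 1024, 128)         # 0.5
w_right = blend_weight(960, 896, 1920, 128)      # 0.5
print(w_left + w_right)                          # 1.0
```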

14.
We present a mobile multi-touch interface for selecting, querying, and visually exploring data visualized on large, high-resolution displays. Although emerging large (e.g., ~10 m wide), high-resolution displays offer great potential for visualizing dense, complex datasets, their utility is often limited by a fundamental interaction problem: the need to interact with data from multiple positions around a large room. Our solution is a selection and querying interface that combines a hand-held multi-touch device with 6-degree-of-freedom tracking in the physical space surrounding the large display. The interface leverages context from both the user's physical position in the room and the data currently being visualized in order to interpret multi-touch gestures. It also uses progressive refinement, favoring several quick approximate gestures over a single complex input in order to map the small mobile multi-touch input space to the large display wall most effectively. The approach is evaluated through two interdisciplinary visualization applications: a multi-variate data visualization for social scientists and a visual database querying tool for biochemistry. The interface was effective in both scenarios, leading to new domain-specific insights and suggesting valuable guidance for future developers.
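A minimal sketch of one plausible geometric core of such an interface: intersecting the ray given by the device's 6-DOF pose with the wall plane to find the pointed-at location (the room geometry here is an assumption; the actual interface layers multi-touch gestures and progressive refinement on top):

```python
# Map a tracked hand-held device's 6-DOF pose to a point on the wall display,
# modelled here as the plane z = 0 in room coordinates.
def ray_wall_intersection(origin, direction):
    """origin, direction: (x, y, z) in metres; returns a 2D wall point or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz >= 0:
        return None                     # pointing away from (or parallel to) the wall
    t = -oz / dz                        # parameter where the ray hits z = 0
    return (ox + t * dx, oy + t * dy)

# Standing 3 m from the wall, aiming slightly up and to the right.
print(ray_wall_intersection((1.0, 1.5, 3.0), (0.2, 0.1, -1.0)))  # (1.6, 1.8)
```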

15.
As more interactive surfaces enter public life, casual interactions from passersby are bound to increase. Most of these users can be expected to carry a mobile phone or PDA, which nowadays offers significant computing capabilities of its own. This opens new possibilities for interaction between such users' private displays and large public ones. In this paper, we present a system that supports these casual interactions. We first explore a method to track mobile phones placed on a horizontal interactive surface by examining the shadows they cast on it. This approach detects the presence of a mobile device, as opposed to any other opaque object, through the signal strength emitted by the built-in Bluetooth transceiver, without requiring any modifications to the device's software or hardware. We then investigate interaction between a Sudoku game running in parallel on the public display and on mobile devices carried by passing users. Mobile users can join a running game by placing their devices on a designated area; the only requirement is that the device is in discoverable Bluetooth mode. After a device has been recognized, client software is sent to it, enabling the user to interact with the running game. Finally, we report the results of a study we conducted to determine the effectiveness and intrusiveness of interactions between users at the tabletop and users with mobile devices.
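A minimal sketch of the combined shadow-plus-Bluetooth presence test described above; `bluetooth_scan` and `surface_shadows` are hypothetical stand-ins for a real inquiry scan and the tabletop tracker, and the RSSI threshold is illustrative:

```python
# A shadow on the tabletop is treated as a phone only if a discoverable
# Bluetooth device is also seen with strong signal (physically close).
RSSI_NEAR_DBM = -55      # assumed threshold for "resting on the surface"

def phones_on_surface(bluetooth_scan, surface_shadows):
    nearby = {addr for addr, rssi in bluetooth_scan() if rssi >= RSSI_NEAR_DBM}
    # Pair each detected shadow with a strong-signal device, if any remain.
    return [(shadow, nearby.pop()) for shadow in surface_shadows() if nearby]

fake_scan = lambda: [("AA:BB:CC:01", -48), ("AA:BB:CC:02", -80)]
fake_shadows = lambda: ["shadow-at-(312,208)"]
print(phones_on_surface(fake_scan, fake_shadows))
```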

16.
Interactive horizontal surfaces provide large semi-public or public displays for colocated collaboration. In many cases, users want to show, discuss, and copy personal information or media, which are typically stored on their mobile phones, on such a surface. This paper presents three novel direct interaction techniques (Select&Place2Share, Select&Touch2Share, and Shield&Share) that allow users to select in private which information they want to share on the surface. All techniques are based on physical contact between mobile phone and surface. Users touch the surface with their phone or place it on the surface to determine the location for information or media to be shared. We compared these three techniques with the most frequently reported approach that immediately shows all media files on the table after placing the phone on a shared surface. The results of our user study show that such privacy-preserving techniques are considered as crucial in this context and highlight in particular the advantages of Select&Place2Share and Select&Touch2Share in terms of user preferences, task load, and task completion time.

17.
Power-efficiency demands on mobile communications device displays have become severe with the emergence of full-video-capable cellular phones and mobile telephony services such as third-generation (3G) networks. The display is the main culprit for power consumption in the mobile-phone user interface, and the backlight unit (BLU) of commonly used active-matrix liquid-crystal displays (AMLCDs) is the main power drain in the display. One way of reducing the power dissipation of a mobile liquid-crystal display is to efficiently distribute and outcouple the light available in the backlight unit, directing the primary wavelength bands in a spectrum-specific fashion through the respective color subpixels. This paper describes a diffractive-optics approach for a novel backlight unit to realize this goal. A model grating structure was fabricated and the distribution of outcoupled light was studied. The results verify that the new BLU concept, based on an array of spectrum-specific gratings, is feasible.
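The spectrum-specific steering rests on the standard grating equation; the derivation below shows how a grating period d can be chosen per color subpixel (the paper's actual grating parameters are not reproduced here, and the numeric example is illustrative):

```latex
% A grating of period d diffracts light of wavelength \lambda incident at
% angle \theta_i into order m at angle \theta_m:
\[
  d\,(\sin\theta_m - \sin\theta_i) = m\,\lambda
  \qquad\Longrightarrow\qquad
  d = \frac{m\,\lambda}{\sin\theta_m - \sin\theta_i}
\]
% Illustrative example: steering green light (\lambda = 532\,\mathrm{nm}) from
% normal incidence (\theta_i = 0) into the first order (m = 1) at
% \theta_m = 30^\circ requires
\[
  d = \frac{1 \times 532\,\mathrm{nm}}{\sin 30^\circ} = 1064\,\mathrm{nm}.
\]
```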

18.
A large body of HCI research focuses on devices and techniques for interacting with applications in more natural ways, such as gestures or direct pointing with fingers or hands. In particular, recent years have seen growing interest in laser pointer-style (LPS) interaction, which allows users to point directly at the screen from a distance through a device handled like a common laser pointer. Several LPS techniques have been evaluated in the literature, usually focusing on users' performance and subjective ratings but not on the effects of these techniques on the musculoskeletal system. One cannot rule out that "natural" interaction techniques, although found attractive by users, require movements that might increase the likelihood of musculoskeletal disorders (MSDs) relative to a traditional keyboard and mouse. Our study investigates the physiological effects of an LPS interaction technique (based on the Wii Remote) compared to a mouse-and-keyboard setup, used in sitting and standing postures. The task (object arrangement) is representative of user actions repeatedly carried out in 3D applications. The results show that LPS interaction caused more muscle exertion than mouse and keyboard, and that posture also played a significant role. They highlight the importance of extending current studies of novel interaction techniques with thorough electromyographic (EMG) analyses.
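As an illustration of the kind of EMG comparison such a study relies on (the paper's exact processing pipeline is not reproduced), the sketch below normalizes mean rectified muscle activity to a maximum voluntary contraction (MVC) so exertion can be compared across conditions as %MVC; the sample values are invented:

```python
# Normalise EMG exertion to %MVC so conditions can be compared fairly.
def percent_mvc(emg_samples_mv, mvc_mv):
    """Mean rectified amplitude of a window, as a percentage of MVC."""
    mean_rectified = sum(abs(s) for s in emg_samples_mv) / len(emg_samples_mv)
    return 100.0 * mean_rectified / mvc_mv

lps_window = [0.21, -0.18, 0.25, -0.22]    # laser-pointer-style condition
mouse_window = [0.08, -0.07, 0.09, -0.06]  # mouse-and-keyboard condition
print(percent_mvc(lps_window, mvc_mv=1.0))    # 21.5 (%MVC)
print(percent_mvc(mouse_window, mvc_mv=1.0))  # 7.5 (%MVC)
```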

19.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, some techniques, and some ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool‐object syntax, where the interaction structure syntactically simulates an action sequence typical of a human's everyday life: One picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand‐ and a force‐input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed in a combined or in an independent way by the system. The use of a hand‐input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.

20.
We present MultiPoint, a set of perspective-based remote pointing techniques that allows users to perform bimanual and multi-finger remote manipulation of graphical objects on large displays. We conducted two empirical studies comparing remote pointing techniques performed with fingers and with laser pointers, in single-point and multi-finger pointing interactions. We explored three types of manual selection gestures: squeeze, breach, and trigger. The fastest and most preferred technique was the trigger gesture in the single-point experiment and the unimanual breach gesture in the multi-finger pointing study. The laser pointer obtained mixed results: it is fast but inaccurate in single-point interaction, and it obtained the lowest ranking and performance in the multipoint experiment. Our results suggest that MultiPoint interaction techniques are superior in performance and accuracy to traditional laser pointers for interacting with graphical objects on a large display from a distance.
