Similar Documents
20 similar documents found.
1.
The authors have developed a simulator to help with the design and evaluation of assistive interfaces. The simulator can predict possible interaction patterns when a task is undertaken with a variety of input devices, and can estimate the time to complete the task in the presence of different disabilities. This paper presents a study that evaluates the simulator on a representative icon-searching task performed by able-bodied, visually impaired and mobility-impaired people. The simulator predicted task completion times for all three groups with statistically significant accuracy, and it also accurately predicted the effects of different interface designs on task completion time. The simulator is used to develop inclusive digital TV interfaces, and a case study is presented that investigates the accessibility requirements of a representative digital TV interface.

2.
Emerging input modalities could facilitate more efficient user interactions with mobile devices. An end-user customization tool based on user-defined context-action rules lets users specify personal, multimodal interaction with smart phones and external appliances. The tool's input modalities include sensor-based, user-trainable free-form gestures; pointing with radio frequency tags; and implicit inputs based on such things as sensors, the Bluetooth environment, and phone platform events. The tool enables user-defined functionality through a blackboard-based context framework enhanced to manage the rule-based application control. Test results on a prototype implemented on a smart phone with real context sources show that rule-based customization helps end users efficiently customize their smart phones and use novel input modalities.
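The context-action rules described above can be pictured as simple condition-action records evaluated against a shared context blackboard. The sketch below is a minimal, hypothetical Python illustration of that idea; the rule fields, the context keys ("gesture", "location") and the mute_phone action are assumptions, not the tool's actual API.

```python
# Minimal sketch of user-defined context-action rules evaluated against a
# blackboard-style context store. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Rule:
    name: str
    conditions: Dict[str, Any]          # required context key -> expected value
    action: Callable[[], None]          # callback fired when all conditions hold


class Blackboard:
    """Shared context store: sensors, Bluetooth environment, platform events."""
    def __init__(self) -> None:
        self.context: Dict[str, Any] = {}
        self.rules: List[Rule] = []

    def update(self, key: str, value: Any) -> None:
        self.context[key] = value
        self._evaluate()

    def _evaluate(self) -> None:
        for rule in self.rules:
            if all(self.context.get(k) == v for k, v in rule.conditions.items()):
                rule.action()


# Example rule: "when I make the 'shake' gesture at home, mute the phone".
def mute_phone() -> None:
    print("phone muted")


bb = Blackboard()
bb.rules.append(Rule("mute-at-home",
                     {"gesture": "shake", "location": "home"},
                     mute_phone))
bb.update("location", "home")
bb.update("gesture", "shake")   # all conditions now hold -> action fires
```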

3.
This article presents a series of user studies to develop a new eye-gaze tracking–based pointing system. We developed a new target prediction model that works for different input modalities and combined eye-gaze tracking–based pointing with a joystick controller, which can reduce pointing and selection times. The system finds important applications in the cockpits of combat aircraft and for novice computer users. User studies confirmed that users can perform significantly faster with this new eye-gaze tracking–based system for both military and everyday computing tasks than with existing input devices. The study also found that the amplitude of the maximum power component obtained through a Fourier transform of the pupil signal correlates significantly with selection times and with users' perceived cognitive load in terms of Task Load Index scores.
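The pupil-based measure mentioned above, the amplitude of the maximum power component of the pupil signal, can be extracted with a standard discrete Fourier transform. The snippet below is a generic NumPy sketch assuming a uniformly sampled pupil-diameter trace; the sampling rate and the exclusion of the DC component are assumptions, not details taken from the paper.

```python
# Generic sketch: amplitude of the dominant (maximum-power) frequency component
# of a pupil-diameter signal, via an FFT. The sampling rate is an assumed value.
import numpy as np

def max_power_component(pupil: np.ndarray, fs: float = 60.0):
    """Return (frequency_hz, amplitude) of the strongest non-DC component."""
    x = pupil - np.mean(pupil)                 # remove the DC offset
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(spectrum) ** 2
    k = int(np.argmax(power[1:]) + 1)          # skip the zero-frequency bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(x)
    return freqs[k], amplitude

# Example with a synthetic 0.5 Hz oscillation in pupil diameter (mm).
t = np.arange(0, 10, 1 / 60.0)
trace = 3.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t)
print(max_power_component(trace))              # ~ (0.5, 0.2)
```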

4.
Remote pointing devices like the Wii remote have a wide range of applications and are becoming more important for manipulating and interacting with information on a distant display such as a smart TV. Because remote pointing devices are used without external support, however, muscular tremor and the disparity between the display space and the motor space can result in usability problems such as pointer jitter and instability. In this research, a solution to these problems is proposed using feedforward, where a user is provided with predictive information in multisensory modalities while approaching a target. The effect of gender on the user experience of remote pointing devices is also examined. Two experiments and a survey showed that the feedforward signal plays the role of a precue and is more effective than typical feedback. It was also found that the effect of modality variations in feedforward depended on the gender of the user. The findings can be used to improve user interfaces for remote pointing controllers.

5.
The use of pointing devices for input to interactive computer systems is increasing. This paper considers the human factors aspects of the use of a range of these devices. A distinction is drawn between direct and indirect devices. A number of aspects of pointing responses are discussed with reference to the experimental evidence: that pointing is non-linguistic, serial, involves gross motor movement and relies on visual feedback. The overall speed and the extent to which it is a natural form of response are also discussed. The paper ends with some conclusions about the suitability of different devices for different tasks.

6.
The authors present a robust, infrastructure-centric, and platform-independent approach to integrating information appliances into the iRoom, an interactive workspace. The Interactive Workspaces Project at Stanford explores new possibilities for people to work together in technology-rich spaces with computing and interaction devices on many different scales. It includes faculty and students from the areas of graphics, human-computer interaction (HCI), networking, ubiquitous computing, and databases, and draws on previous work in all those areas. We design and experiment with multidevice, multiuser environments based on a new architecture that makes it easy to create and add new display and input devices, to move work of all kinds from one computing device to another, and to support and facilitate group interactions. In the same way that today's standard operating systems make it feasible to write single-workstation software that uses multiple devices and networked resources, we are constructing a higher-level operating system for the world of ubiquitous computing. We combine research on infrastructure (ways of flexibly configuring and connecting devices, processes, and communication links) with research on HCI (ways of interacting with heterogeneous, changing collections of devices with multiple modalities).

7.
Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface which exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (i.e., haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks using the haptic feedback interface rather than the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings will serve as an initial step to assist visually impaired people to easily access required information on large public displays using haptic interfaces.
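For reference, the two models used in the regression analysis above are usually written in their standard forms (the Shannon formulation of Fitts' law and the straight-tunnel steering law); the exact formulation used in the paper is not stated in the abstract, so the forms below are an assumption. The constants a and b are empirically fitted.

```latex
% Standard forms assumed here; D = target distance, W = target (or tunnel) width,
% A = tunnel length, a and b empirically fitted constants.
\[
  MT_{\text{pointing}} = a + b \, \log_2\!\left(\frac{D}{W} + 1\right)
\]
\[
  MT_{\text{steering}} = a + b \, \frac{A}{W}
\]
```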

8.
Virtual reality-based simulation technology has evolved into a useful design and analysis tool for evaluating, early in the design process, the performance of human-operated agricultural and construction machinery. Detecting anomalies in the design prior to building physical prototypes and expensive testing leads to significant cost savings. The efficacy of such simulation technology depends on how realistically the simulation mimics the real-life operation of the machinery. It is therefore necessary to achieve ‘real-time’ dynamic simulation of such machines with operator-in-the-loop functionality. Such simulation often leads to intensive computational burdens. A distributed architecture was developed for off-road vehicle dynamic models and 3D graphics visualization to distribute the overall computational load of the system across multiple computational platforms. Multi-rate model simulation was also used to simulate various system dynamics with different integration time steps, so that the computational power can be distributed more intelligently. This architecture consisted of three major components: a dynamic model simulator, a virtual reality simulator for 3D graphics, and an interface to the controller and input hardware devices. Several off-road vehicle dynamics models were developed with varying degrees of fidelity, as well as automatic guidance controller models and a controller area network interface to embedded controllers and user input devices. The simulation architecture reduced the computational load on an individual machine and increased the real-time simulation capability with complex off-road vehicle system models and controllers. This architecture provides an environment to test virtual prototypes of the vehicle systems in real time and the opportunity to test the functionality of newly developed controller software and hardware.
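Multi-rate simulation, as used above, simply means integrating fast subsystems with a small time step while slower subsystems (e.g., graphics or guidance logic) are updated less often. The following is a generic, hypothetical Python sketch of that scheduling idea, not the paper's architecture; the step sizes and the toy dynamics are assumptions.

```python
# Generic multi-rate loop: a fast vehicle-dynamics step nested inside a slower
# update for graphics/controller tasks. Step sizes are illustrative assumptions.
DT_FAST = 0.001     # 1 kHz integration of the vehicle dynamics
DT_SLOW = 0.020     # 50 Hz graphics / guidance-controller update

def step_dynamics(state, dt):
    # Placeholder single integration step (toy damped motion, semi-implicit Euler).
    x, v = state
    a = -0.5 * v
    v = v + dt * a
    return (x + dt * v, v)

def update_graphics_and_controller(state):
    pass                               # send state to the 3D view / controller here

state = (0.0, 1.0)
t = 0.0
while t < 1.0:
    # Run the fast integrator for the duration of one slow frame.
    for _ in range(int(DT_SLOW / DT_FAST)):
        state = step_dynamics(state, DT_FAST)
    update_graphics_and_controller(state)
    t += DT_SLOW
```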

9.
IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays or Tablet PC computers. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed the system to address architectural user requirements. In particular, our system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks, such as navigation or annotation. The interface relies on a novel stroke-based interaction supported by simple laser pointers as input devices for large-scale displays. Furthermore, input devices such as speech and body tracking allow IMMIView to support multiple users. Moreover, each user can select different modalities according to personal preference and the adequacy of a modality for the task at hand. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e. with two or more users interacting at the same time on the same system. The multi-modal fusion system listens to inputs from all the IMMIView modules in order to model user actions and issue commands. The multiple modalities are fused by a simple rule-based sub-module developed in IMMIView and presented in this paper. A user evaluation of IMMIView is presented. The results show that users feel comfortable with the system and suggest that users prefer the multi-modal approach over more conventional interactions, such as mouse and menus, for the architectural tasks presented.

10.
Murata A, Iwase H. Human Factors, 2005, 47(4): 767-776
The usability of a touch-panel interface was compared among young, middle-aged, and older adults. In addition, a performance model of a touch panel was developed so that pointing time could be predicted with higher accuracy. Moreover, the target location to which a participant could point most quickly was determined. The pointing time with a PC mouse was longer for the older adults than for the other age groups, whereas there were no significant differences in pointing time among the three age groups when a touch-panel interface was used. Pointing to the center of a square target led to the fastest pointing time among nine target locations. Based on these results, we offer some guidelines for the design of touch-panel interfaces and show implications for users of different age groups. Actual or potential applications of this research include designing touch-panel interfaces to make them accessible for older adults and predicting movement times when users operate such devices.

11.
Over the past 45 years there has been recurring interest in supporting sketching on electronic devices and interactive surfaces, and although sketch recognition is fairly well addressed in the literature, the adoption of electronic sketching as a design tool remains a challenge. The current popularity of touch-screen devices allows designers to sketch using their device of preference, while the multi-platform capabilities made possible by HTML5 allow sketching systems to run on many devices at the same time. Combined, these two factors open new opportunities for researchers to explore how designers sketch in flexible setups that combine heterogeneous sketching devices in design sessions. This also creates new possibilities for prototyping user interfaces: with such multi-platform systems, designers can design interfaces for multiple devices by producing and testing them on the devices themselves. This paper reports a pilot experiment conducted with six developers, grouped into pairs for design sessions using Gambit, a multi-platform sketching system that provides a lightweight approach for prototyping user interfaces for many devices at once. We performed a discourse analysis of the professionals, based on recorded videos of interviews conducted during and after the design sessions, and aggregated the data in order to investigate the main requirements for multi-platform sketching systems.

12.
A discrete event simulator named DDPSIM for distributed processing networks is presented. The simulator, which was initially developed to support the design of a distributed processing system for an air defense application, has continued to evolve to accommodate a more general class of bus-connected networks in which software modules, or tasks, are distributed. It has also been used to model a complex air traffic control application which involved multiple networks. The modular organization and the various distributed network component models of the simulator are described in detail. The simulator provides user-friendly interfaces that include simple input conventions for system definition, and tabular and graphic outputs for comprehensive analysis of the simulation results. A relatively complex distributed processing system for air traffic control is used as an example to illustrate the simulation procedure and to show the simulation results. To conclude, the lessons learned from the experience in simulation are discussed.
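A discrete event simulator of this kind is built around a time-ordered event queue: events are popped in timestamp order and their handlers may schedule further events. The sketch below is a generic event-queue kernel in Python for illustration only; DDPSIM's actual structure, component models and latencies are not described at this level in the abstract.

```python
# Minimal discrete-event simulation kernel (generic illustration, not DDPSIM).
import heapq
import itertools

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []                  # entries: (time, seq, handler, payload)
        self._seq = itertools.count()     # tie-breaker for simultaneous events

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue, (self.now + delay, next(self._seq), handler, payload))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

# Example: a task on one node finishes and sends a message over the bus.
def task_finished(sim, msg):
    print(f"t={sim.now:.3f}: task done, sending '{msg}' on the bus")
    sim.schedule(0.002, bus_delivery, msg)      # 2 ms bus latency (illustrative)

def bus_delivery(sim, msg):
    print(f"t={sim.now:.3f}: '{msg}' delivered to the receiving node")

sim = Simulator()
sim.schedule(0.010, task_finished, "track update")
sim.run(until=1.0)
```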

13.
Multimodal interfaces have attracted more and more attention. Most research focuses on each interaction mode independently and then fuses information at the application level. Recently, several frameworks and models have been proposed to support the design and development of multimodal interfaces. However, providing automatic modality adaptation in multimodal interfaces remains challenging. Existing approaches use rule-based specifications to define the adaptation of input/output modalities, and such specifications suffer from problems of completeness and coherence. Distinct from previous work, this paper presents a novel approach that quantifies the user's preference for each modality and treats adaptation as an optimization problem that searches for a set of input/output modalities matching the user's preferences. Our approach applies a cross-layer design, which considers adaptation from the perspectives of the interaction context, available system resources, and QoS requirements. Furthermore, our approach supports human-centric adaptation: a user can report a preference for a modality so that the selected modalities fit the user's personal needs. An optimal solution and a heuristic algorithm have been developed to automatically select an appropriate set of modality combinations in a given situation. We have designed a framework based on the heuristic algorithm and an existing ontology, and applied the framework to conduct a utility evaluation using a within-subject experiment. Fifty participants were invited to go through three scenarios and compare automatically selected modalities with randomly selected modalities. The results from the experiment show that users perceived the automatically selected modalities as appropriate and satisfactory.
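The idea of treating modality adaptation as optimization can be illustrated with a toy formulation: choose the modality combination that maximizes a preference score subject to a resource budget. The sketch below is a brute-force Python illustration of that idea with assumed preference scores, costs and budget; it is not the paper's optimal or heuristic algorithm.

```python
# Toy formulation: pick the input/output modality set with the highest total
# user-preference score whose resource cost fits the available budget.
# Preference scores, costs and the budget below are illustrative assumptions.
from itertools import combinations

modalities = {
    # name: (preference score, resource cost in arbitrary CPU/bandwidth units)
    "speech_in":   (0.9, 3),
    "gesture_in":  (0.6, 2),
    "touch_in":    (0.7, 1),
    "audio_out":   (0.8, 2),
    "display_out": (0.5, 4),
}

def best_combination(budget):
    best, best_score = None, -1.0
    names = list(modalities)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            score = sum(modalities[m][0] for m in combo)
            cost = sum(modalities[m][1] for m in combo)
            if cost <= budget and score > best_score:
                best, best_score = combo, score
    return best, best_score

print(best_combination(budget=6))   # exhaustive search; a heuristic would prune this
```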

14.
15.
A method called “SymbolDesign” is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices, such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device, and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily-designed and easily-used computer input mechanism for people without physical limitations, and, with some modifications, has the potential to become a computer access tool for people with severe paralysis.

16.
Small devices such as personal digital assistants (PDAs) are widely used to access the World Wide Web (Web). However, accessing the Web from small devices is hampered by poor interface bandwidth, such as small keyboards and limited pointing devices. There is little empirical work investigating the input difficulties caused by such limited facilities; however, anecdotal evidence suggests that there is a link between able-bodied users of the mobile Web and motor-impaired users of the Web on desktop computers. If this is the case, we could transfer the solutions that already exist for motor-impaired users to the mobile Web, and vice versa. This paper presents a user study that investigates the input errors of mobile Web users in both typing and pointing. The study identifies six types of typing errors and three types of pointing errors shared between our two user domains. We find that mobile Web users often confuse the different characters located on the same key, press keys that are adjacent to the target key, and miss certain key presses. When using a stylus, they also click in the wrong places, slide the stylus during multiple clicks, and make errors when dragging. Our results confirm that despite using different input devices, mobile Web users share common problems with motor-impaired desktop users; we therefore surmise that it will be beneficial to transfer available solutions between these user domains in order to address their common problems.

17.
In recent decades, we have witnessed growing interest in touchless gestural user interfaces. Among other reasons, this is due to the wide availability of different low-cost gesture acquisition hardware (the so-called “Kinect-like devices”). As a consequence, there is a growing need for solutions that make it easy to integrate such devices into actual systems. In this paper, we present KIND-DAMA, an open and modular middleware that helps in the development of interactive applications based on gestural input. We first review the existing middlewares for gestural data management. Then, we describe the proposed architecture and compare its features against similar existing solutions found in the literature. Finally, we present a set of studies and use cases that show the effectiveness of our proposal in some possible real-world scenarios.

18.
The Garnet research project, which is creating a set of tools to aid the design and implementation of highly interactive, graphical, direct-manipulation user interfaces, is discussed. Garnet also helps designers rapidly develop prototypes for different interfaces and explore various user-interface metaphors during early product design. It emphasizes easy specification of object behavior, often by demonstration and without programming. Garnet contains a number of different components grouped into two layers. The Garnet Toolkit (the lower layer) supplies the object-oriented graphics system and constraints, a set of techniques for specifying the objects' interactive behavior in response to the input devices, and a collection of interaction techniques. On top of the Garnet Toolkit layer are a number of tools to make creating user interfaces easier. The components of both layers are described.

19.
Considerable research has been done on using information from multiple modalities, such as hands, facial gestures or speech, for better interaction between humans and computers, and many promising human–computer interfaces (HCI) have been developed in recent years. However, most current HCI systems have a few drawbacks: firstly, they are highly dependent on the performance of individual sensors. Secondly, the information fusion process from these sensors tends to ignore the semantic nature of the modalities, which may reinforce or clarify each other over time. Finally, they are not robust enough at representing the imprecise nature of human gestures, since individual gestures are highly ambiguous in themselves. In this paper, we propose an approach for the semantic fusion of different input modalities, based on transferable belief models. We show that this approach allows for a better representation of the ambiguity involved in recognizing gestures. Ambiguity is resolved by combining the beliefs of the individual sensors on the input information to form new extended concepts, based on a pre-defined domain-specific knowledge base represented by conceptual graphs. We apply this technique to a multimodal system consisting of a hand gesture recognition sensor and a brain-computer interface. It is shown that the technique can successfully combine individual gestures obtained from the two sensors to form meaningful concepts and resolve ambiguity. The advantage of this approach is that it is robust even if one of the sensors is inefficient or has no input. Another important feature is its scalability, wherein more input modalities, like speech or facial gestures, can be easily integrated into the system at minimal cost, to form a comprehensive HCI interface.
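The belief combination underlying transferable belief models can be illustrated with the conjunctive rule of combination, which multiplies and intersects the mass assignments of two sources and, in the TBM, keeps mass on the empty set as a measure of conflict rather than normalizing it away. The sketch below is a generic Python illustration with made-up gesture hypotheses and mass values; it is not the paper's knowledge base or fusion module.

```python
# Conjunctive combination of two belief (mass) functions over sets of hypotheses,
# as used in the transferable belief model. Hypotheses and masses are illustrative.
from collections import defaultdict

def conjunctive_combination(m1, m2):
    """Combine two mass functions; keys are frozensets of hypotheses."""
    combined = defaultdict(float)
    for a, wa in m1.items():
        for b, wb in m2.items():
            combined[a & b] += wa * wb      # mass on the empty set measures conflict
    return dict(combined)

# Sensor 1 (hand gestures) and sensor 2 (brain-computer interface), toy example.
SELECT, ROTATE, ZOOM = "select", "rotate", "zoom"
m_hand = {frozenset({SELECT}): 0.6,
          frozenset({SELECT, ROTATE}): 0.3,
          frozenset({SELECT, ROTATE, ZOOM}): 0.1}   # last entry represents ignorance
m_bci  = {frozenset({ROTATE}): 0.5,
          frozenset({SELECT, ROTATE, ZOOM}): 0.5}

for focal, mass in conjunctive_combination(m_hand, m_bci).items():
    print(set(focal) or "conflict", round(mass, 3))
```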

20.
Touch-based interaction with computing devices is becoming more and more common. In order to design for this setting, it is critical to understand the basic human factors of touch interactions such as tapping and dragging; however, there is relatively little empirical research in this area, particularly for touch-based dragging. To provide foundational knowledge in this area, and to help designers understand the human factors of touch-based interactions, we conducted an experiment using three input devices (the finger, a stylus, and a mouse as a performance baseline) and three different pointing activities. The pointing activities were bidirectional tapping, one-dimensional dragging, and radial dragging (pointing to items arranged in a circle around the cursor). Tapping activities represent the elemental target selection method and are analysed as a performance baseline. Dragging is also a basic interaction method, and understanding its performance is important for touch-based interfaces because it involves relatively high contact friction. Radial dragging is also important for touch-based systems: the technique is claimed to be well suited to direct input, yet radial selections normally involve the relatively unstudied dragging action, and there have been few studies of the interaction mechanics of radial dragging. Performance models of tapping, dragging, and radial dragging are analysed. For tapping tasks, we confirm prior results showing finger pointing to be faster than the stylus/mouse but inaccurate, particularly with small targets. In dragging tasks, we also confirm that finger input is slower than the mouse and stylus, probably due to the relatively high surface friction. Dragging errors were low in all conditions. As expected, performance conformed to Fitts' Law. Our results for radial dragging are new, showing that errors, task time and movement distance are all linearly correlated with the number of items available. We demonstrate that this performance is modelled by the Steering Law (where the tunnel width increases with movement distance) rather than Fitts' Law. Other radial dragging results showed that the stylus is fastest, followed by the mouse and finger, but that the stylus has the highest error rate of the three devices. Finger selections in the north-west direction were particularly slow and error prone, possibly due to a tendency for the finger to stick–slip when dragging in that direction.
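The steering law referenced above generalizes Fitts' law to trajectory-based movement: time grows with the integral of inverse tunnel width along the path. For a radial tunnel whose width grows in proportion to the distance travelled (one reading of "the tunnel width increases with movement distance"), the integral reduces to a logarithmic index of difficulty. The forms below are the standard ones from the steering-law literature, stated here for reference under that assumption rather than taken from the abstract.

```latex
% General steering law; a, b are fitted constants, C is the tunnel path
% and W(s) its width at arc length s.
\[
  T = a + b \int_{C} \frac{ds}{W(s)}
\]
% Radial tunnel whose width grows linearly with distance, W(s) = c\,s:
% integrating from an initial radius s_0 out to the target radius A gives
\[
  T = a + \frac{b}{c}\,\ln\!\left(\frac{A}{s_0}\right)
\]
```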
