Similar Documents
1.
In this age of (near-)adequate computing power, the power and usability of the user interface are as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP (Windows, Icons, Menus, Point-and-Click) GUI based on keyboard and mouse has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future, and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts and can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality, where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involve more of our senses by combining the use of gesture, speech, and haptics.

2.
With the advent of Web 2.0, the number of IT systems used in university courses is growing. Yet research consistently shows that a significant proportion of students are anxious about computer use. The quality of the first experience with computers has been consistently mentioned as a significant contributor to anxiety onset. However, the effect of users' first experience on system-related anxiety has not, to the authors' knowledge, been researched using controlled experiments. Indeed, little experiment-based research has been conducted on the wiki user experience, specifically users' evaluations of and emotional reactions towards editing. This research uses usability engineering principles to engineer four different wiki experiences for novice wiki users and measures the effect each has on usability, on anxiety during editing, and on anxiety about future wiki editing. Each experience varied in the type of training spaces available before completing six live wiki editing tasks. We found that the anxiety experienced by users was not related to computer anxiety but was wiki-specific. Users in the in-built tutorial conditions also rated the usability of the editing interface higher than users in the non-tutorial conditions. The tutorial conditions also led to a significant reduction in wiki anxiety during interaction but did not significantly affect future editing anxiety. The findings suggest that the use of an in-built tutorial reduces emotional and technological barriers to wiki editing, and that controlled experiments can help in discovering how aspects of the system experience can be designed to affect usability and anxiety towards editing wikis.

3.
Usability tests are a part of user-centered design. Usability testing with disabled people is necessary if they are among the potential users. Several researchers have already investigated usability methods with sighted people. However, research with blind users is insufficient, owing, for example, to their different knowledge of assistive technologies and to the difficulty of analyzing usability issues from the non-visual output of assistive devices. The authors therefore aspire to extend theory and practice by investigating four usability methods involving blind, visually impaired, and sighted people. These methods comprise local tests, synchronous remote tests, tactile paper prototyping, and computer-based prototyping. In terms of the effectiveness of evaluation and the experience of participants and the facilitator, local tests were compared with synchronous remote tests, and tactile paper prototyping with computer-based prototyping. The comparison of local and synchronous remote tests found that the number of usability problems uncovered in different categories was comparable for both approaches. In terms of task completion time, there is a significant difference for blind participants, but not for the visually impaired and sighted. Most of the blind and visually impaired participants preferred the local test. The comparison of tactile paper prototyping and computer-based prototyping revealed that tactile paper prototyping provides a better overview of an application, while interaction with computer-based prototypes is closer to reality. Problems in planning and conducting these methods, as they arise in particular with blind people, are also discussed. Based on the authors' experiences, recommendations are provided for dealing with these problems from both technical and organizational perspectives.

4.
Despite the existence of advanced functions in smartphones, most blind people still use old-fashioned phones with familiar layouts and a dependence on tactile buttons. Smartphones support accessibility features including vibration, speech and sound feedback, and screen readers. However, these features are only intended to provide feedback in response to user commands or input. It is still a challenge for blind people to discover functions on the screen and to input commands. Although voice commands are supported in smartphones, they are difficult for a system to recognize in noisy environments. At the same time, smartphones are integrated with sophisticated motion sensors, and motion gestures based on device tilt have been gaining attention for eyes-free input. We believe that these motion gesture interactions offer more efficient access to smartphone functions for blind people. However, most blind people are not smartphone users, and they are aware of neither the affordances available in smartphones nor the potential for interaction through motion gestures. To investigate the most usable gestures for blind people, we conducted a user-defined gesture study with 13 blind participants. Using the gesture set and design heuristics from the user study, we implemented motion-gesture-based interfaces with speech and vibration feedback for browsing phone books and making calls. We then conducted a second study to investigate the usability of the motion gesture interface and user experiences with the system. The findings indicated that motion gesture interfaces are more efficient than traditional button interfaces. From the study results, we provide implications for designing smartphone interfaces.
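The abstract does not describe how tilt maps to commands, so the following is only a rough sketch of a threshold-based tilt classifier for eyes-free phone book browsing; the axis conventions, thresholds, and gesture-to-command mapping are invented for illustration and are not the gesture set elicited in the study.

```python
import math

# Illustrative thresholds and gesture names -- not the gesture set elicited
# in the study, just a sketch of how tilt input might drive a phone book.
TILT_THRESHOLD_DEG = 25.0

def tilt_angles(ax, ay, az):
    """Pitch/roll in degrees from one gravity-normalized accelerometer sample."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def classify_tilt(ax, ay, az):
    """Map the current device pose to a discrete eyes-free command, or None."""
    pitch, roll = tilt_angles(ax, ay, az)
    if roll > TILT_THRESHOLD_DEG:
        return "next_contact"        # tilt one way: move down the phone book
    if roll < -TILT_THRESHOLD_DEG:
        return "previous_contact"    # tilt the other way: move up
    if pitch > TILT_THRESHOLD_DEG:
        return "call_selected"       # tilt toward the ear: dial
    return None                      # near-flat pose: no command

# Example: a sample with the phone tilted about 27 degrees to one side.
print(classify_tilt(0.0, 0.45, 0.9))  # -> "next_contact"
```

Speech or vibration feedback would then confirm each recognized command, which is what makes such a mapping usable without vision.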

5.
Neuro-cognitively inspired haptic user interfaces
Haptic systems and devices are a recent addition to multimodal systems. These devices have widespread applications, such as surgical simulation, medical and procedural training, scientific visualization, assistive and rehabilitative devices for individuals with physical or neurological impairments, and assistive devices for individuals who are blind. While the potential of haptics in natural human-machine interaction is indisputable, its realization is still a long way off. There are considerable research challenges in the development of natural haptic interfaces. The study of human tactile abilities is a recent endeavor, and many of the available systems still do not incorporate the domain knowledge of the psychophysical, biomechanical, and neurological elements of haptic perception. Development of smart and effective haptic interfaces and devices requires extensive studies that link perceptual phenomena with measurable parameters, and the incorporation of such domain knowledge into the engineering of haptic interfaces. This paper presents the design, development, and usability testing of a neuro-cognitively inspired haptic user interface for individuals who are blind. The proposed system design is inspired by the neuro-cognitive basis of haptic perception and incorporates the computational aspects and requirements of a multimodal information-processing system. Usability testing of the system suggests that biologically inspired haptic user interfaces may form a powerful paradigm for haptic user interface design.

6.
This paper examines and compares the usability problems associated with eye-based and head-based assistive technology pointing devices when used for direct manipulation on a standard graphical user interface. It examines the pros and cons of eye-based pointing in comparison to the established assistive technology technique of head-based pointing and illustrates the usability factors responsible for the apparently low usage and unpopularity of eye-based pointing. It shows that user experience and target size on the interface are the predominant factors affecting eye-based pointing and suggests that these obstacles could be overcome to make eye-based pointing a viable and widely available direct manipulation interaction technique for the motor-disabled community.

7.
Chan FY, Khalid HM. Ergonomics, 2003, 46(13-14): 1386-1407
Usability and affective issues in using automatic speech recognition technology to interact with an automated teller machine (ATM) are investigated in two experiments. The first uncovered the dialogue patterns of ATM users for the purpose of designing the user interface of a simulated speech ATM system. Applying the Wizard-of-Oz methodology together with multiple-mapping and word-spotting techniques, the speech-driven ATM accommodates bilingual users of Bahasa Melayu and English. The second experiment evaluates the usability of a hybrid speech ATM, comparing it with a simulated manual ATM. The aim is to investigate how natural and fun talking to a speech ATM can be for first-time users. Subjects performed withdrawal and balance enquiry tasks. ANOVA was performed on the usability and affective data. The results showed significant differences between the systems in the ability to complete the tasks as well as in transaction errors. Performance was measured by the time taken by subjects to complete the tasks and the number of speech recognition errors that occurred. On the basis of user emotions, it can be said that the hybrid speech system enabled pleasurable interaction. Despite the limitations of speech recognition technology, users appear ready to talk to the ATM when it becomes available for public use.
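As a rough illustration of the word-spotting idea, the sketch below scans a recognizer transcript for bilingual task keywords; the keyword lists and example utterances are invented, not the study's actual vocabulary.

```python
# Illustrative word spotting for a bilingual speech ATM: scan a recognizer
# transcript for task keywords in either Bahasa Melayu or English. The
# keyword lists are examples, not the vocabulary used in the study.
KEYWORDS = {
    "withdrawal": {"withdraw", "withdrawal", "keluarkan", "pengeluaran"},
    "balance":    {"balance", "enquiry", "baki"},
}

def spot_intent(transcript: str) -> str | None:
    """Return the first ATM task whose keywords appear in the transcript."""
    words = set(transcript.lower().split())
    for intent, keys in KEYWORDS.items():
        if words & keys:
            return intent
    return None

print(spot_intent("saya nak keluarkan duit"))   # -> "withdrawal"
print(spot_intent("check my balance please"))   # -> "balance"
```

Spotting keywords rather than parsing full sentences is what lets such a dialogue tolerate recognition errors in the surrounding words.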

8.
Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface which exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (i.e., haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks with the haptic feedback interface than with the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings serve as an initial step toward helping visually impaired people easily access the information they need on large public displays using haptic interfaces.
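As an illustration of the regression the abstract mentions, the sketch below fits MT = a + b * ID using the Shannon formulation of Fitts' index of difficulty; the trial data are invented for demonstration, not taken from the paper.

```python
import math
import numpy as np

# Shannon formulation of Fitts' index of difficulty:
# ID = log2(D / W + 1), with target distance D and width W.
# (The steering law instead uses ID = D / W for tunnel-steering tasks.)
def index_of_difficulty(distance: float, width: float) -> float:
    return math.log2(distance / width + 1)

# Invented sample trials (distance, width, movement time in seconds) --
# purely illustrative, not the paper's data.
trials = [(200, 40, 0.81), (400, 40, 1.02), (400, 20, 1.27), (800, 20, 1.55)]

ids = np.array([index_of_difficulty(d, w) for d, w, _ in trials])
mts = np.array([mt for _, _, mt in trials])

# Least-squares fit of the Fitts' law model MT = a + b * ID.
b, a = np.polyfit(ids, mts, 1)
r = np.corrcoef(ids, mts)[0, 1]
print(f"MT = {a:.3f} + {b:.3f} * ID  (r^2 = {r * r:.3f})")
```

A high r-squared for such a fit is what supports the claim that the model is effective for evaluating an interface.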

9.
Speech applications are typically designed to be used without any instructions or manuals. More complex applications commonly come with web-based or printed manuals. An alternative approach, software tutoring, has been studied in the context of graphical user interfaces. In software tutoring, a software component guides users while they work with an application that is new to them. To evaluate the viability of software tutoring in speech-based applications, a two-condition between-participants experiment (N = 18) was conducted. Participants learned to use a speech-based e-mail reading application and performed several tasks with it. In the first condition, the e-mail application included an embedded tutoring component that guided the participants in using the application. In the second condition, a web manual was used. All interactions with the systems were recorded and annotated. Participants also filled in questionnaires that reported their attitudes towards the guidance they received and towards the e-mail reading application. The speech-based tutor performed as well as the web-based manual, with no significant differences between the conditions in how well the participants managed to accomplish the tasks with the e-mail application or in participants' attitudes towards the application or the guidance. In addition, during the learning period the participants in the tutored condition had significantly fewer problems with the speech interface.

10.
Traditionally, human factors specialists have had a rather severe attitude toward human performance with computers: their goal was maximum throughput, often measured in transactions per minute. This attitude was justified when computers were mainly work-related; in some cases it still proves wise. For example, a usability improvement that shaves one second off the time it takes a directory-assistance operator to search a database for a telephone number saves several million dollars per year in the US alone. This performance-obsessed approach to usability led many early user interface experts to condemn the popular term "user friendly" with the argument that users didn't need "friendly" computers; they needed efficient designs that let them complete their tasks faster.

11.
This article describes a research project aimed at improving search engine usability for sightless persons who use assistive technology to navigate the web. At the beginning of this research, a preliminary study was performed concerning the accessibility and usability of search tools, and eight guidelines were formulated for designing search engine user interfaces. The derived guidelines were then applied in modifying the source code of Google's interface, while maintaining the same look and feel, in order to demonstrate that with very little effort it is possible to make interaction easier, more efficient, and less frustrating for sightless individuals. After providing a general overview of the project, the paper focuses on interface design and implementation.

12.
At the University of Regina in Saskatchewan, an on-going R&D program within the Mathesis Group is developing a computer-assisted learning system for visually handicapped university students. The program has grown around the use of a speech synthesizer (a relatively inexpensive, older Votrax unit was available) to provide, initially, a convenient and flexible audio tool for blind persons tackling BASIC programming assignments. This CAL program has progressed through several stages: convenient Braille flowcharting was developed to assist programming analysis, the synthesized speech support component has evolved from micro- to minicomputer hardware, and now a convenient spelling/dictionary feature is being employed for the writing of essays.

13.
Facing the ever-growing complexity and usability problems of the PC, some propose specialized computers as a solution, while others argue that such "information appliances" are unnecessary. Rather than pitting information appliances and PCs against each other, we argue instead for exploring the design space of using them together. We experimented with a device-teaming approach that takes advantage of both types of devices: the familiar, high-bandwidth user interface of the PC, and the task-specific form factors of an information appliance. In our experimentation, we designed and developed a phone n' computer (PnC) by teaming up an IP phone with a general-purpose PC. We outline the design space for such a combination and describe several point designs we created that distribute functions between the two devices according to their characteristics. Compared with using phones and computers separately, our designs provide new and richer user experiences, including Drop-to-Call, sharing visual information, and caller information display.

14.
While many of the existing velocity control techniques are well designed, the techniques are often application-specific, making it difficult to compare their effectiveness. In this paper, we evaluate five known velocity control techniques using the same experimental settings. We compare the techniques based on the assumption that a good travel technique should be easy to learn and easy to use, should cause the user to have few collisions with the VE, should allow the user to complete tasks faster, and should promote better recollection of the environment afterwards. In our experiments, we ask twenty users to use each velocity control technique to navigate through virtual corridors while performing information-gathering tasks. In all cases, the users use pointing to indicate the direction of travel. We then measure the users' ability to recollect the information they see in the VE, as well as how much time they spend in the VE and how often they collide with the virtual walls. After each test, we use questionnaires to evaluate the ease of learning and ease of use of the velocity control technique, and the users' sense of presence in the environment. Each of the travel techniques is then evaluated based on the users' performances in the VE and the results of their questionnaires.

15.
In this paper, an empirically based study is described which was conducted to gain a deeper understanding of the challenges faced by the visually impaired community when accessing the Web. The study, involving 30 blind and partially sighted computer users, identified navigation strategies and perceptions of page layout and graphics when using assistive devices such as screen readers. Analysis of the data revealed that current assistive technologies impose navigational constraints and provide limited information on web page layout. Conveying additional spatial information could enhance the exploration process for visually impaired Internet users. It could also assist the process of collaboration between blind and sighted users when performing web-based tasks. The findings from the survey informed the development of a non-visual interface, which uses the benefits of multimodal technologies to present spatial and navigational cues to the user.

16.
This paper describes our initial effort in developing a trilingual speech interface for financial information inquiries. Our foreign exchange inquiry system consists of: (i) monolingual and trilingual speech recognizers, which receive the user's spoken input in the form of microphone speech; (ii) a real-time data capture component, which continuously updates a relational database from a financial data satellite feed; and (iii) a trilingual speech generation component, which generates English and Chinese text based on the raw financial data. The generated text is then transformed into spoken presentations. English text is processed by the FESTIVAL synthesizer system. Chinese text is sent to our syllable-based synthesizer, which employs a concatenative resequencing technique to produce spoken presentations in Putonghua or Cantonese. The speech interface is augmented with a visual display which aims to provide feedback to the user at all times during an interaction. Within the restricted scope of foreign exchange (FOREX), our recognition accuracies remain above 93%. Confusions across languages contributed significantly to our recognition errors, but most are confusions between the same currency/country names spoken in different languages. These errors are not detrimental with respect to data retrieval. Our concatenative resequencing technique reports the date, time, and exchange rates of the input currency pair. A demonstration can be found at http://www.se.cuhk.edu.hk/hccl/demos/.
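As a rough sketch of the concatenative idea (not the authors' synthesizer), syllable recordings can be spliced end to end, assuming one WAV clip per syllable with identical sample formats; the file names and syllable sequence are invented for illustration.

```python
import wave

# Concatenate pre-recorded syllable recordings into one utterance. Assumes
# one WAV file per syllable, all with identical sample rate and format.
# File names and the example syllable sequence are illustrative only.
def concatenate_syllables(syllable_files: list[str], out_path: str) -> None:
    frames, params = [], None
    for path in syllable_files:
        with wave.open(path, "rb") as w:
            if params is None:
                params = w.getparams()  # reuse the first clip's format
            frames.append(w.readframes(w.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)  # nframes is patched automatically on close
        for chunk in frames:
            out.writeframes(chunk)

# e.g., splice three hypothetical Cantonese syllable clips into one report:
concatenate_syllables(["mei5.wav", "gam1.wav", "deui3.wav"], "report.wav")
```

Because dates, times, and exchange rates draw on a small, fixed syllable inventory, resequencing recorded units can sound far more natural than general-purpose synthesis within this restricted domain.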

17.
A shared interactive display (e.g., a tabletop) provides a large space for collaborative interactions. However, a public display lacks a private space for accessing sensitive information. On the other hand, a mobile device offers a private display and a variety of modalities for personal applications, but it is limited by a small screen. We have developed a framework that supports fluid and seamless interactions between a tabletop and multiple mobile devices. This framework can continuously track each user's actions (e.g., hand movements or gestures) on top of a tabletop and then automatically generate a unique personal interface on an associated mobile device. This type of inter-device interaction integrates a collaborative workspace (i.e., a tabletop) and a private area (i.e., a mobile device) with multimodal feedback. To support this interaction style, an event-driven architecture is applied to implement the framework on the Microsoft PixelSense tabletop. This framework hides the details of user tracking and inter-device communication. Thus, interface designers can focus on the development of domain-specific interactions by mapping a user's actions on the tabletop to a personal interface on his or her mobile device. The results from two different studies demonstrate the usability of the proposed interaction.
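As a sketch of how an event-driven architecture can route tabletop events to a personal interface on a paired mobile device, consider the following minimal dispatcher; all class, event, and function names are invented, and this is not the Microsoft PixelSense API.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Callable

# A generic event-driven dispatcher in the spirit of the framework: tabletop
# events are routed to handlers that build a personal UI on the paired mobile
# device. All names here are invented for illustration.
@dataclass
class TabletopEvent:
    user_id: str          # which tracked user produced the event
    kind: str             # e.g., "touch_document", "gesture_grab"
    payload: dict = field(default_factory=dict)

class InterDeviceDispatcher:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[TabletopEvent], None]]] = defaultdict(list)

    def on(self, kind: str, handler: Callable[[TabletopEvent], None]) -> None:
        self._handlers[kind].append(handler)

    def dispatch(self, event: TabletopEvent) -> None:
        for handler in self._handlers[event.kind]:
            handler(event)

# Domain-specific mapping: touching a document on the tabletop opens a
# private editor on that user's paired mobile device (send_to_mobile is a
# stand-in for the framework's inter-device communication layer).
def send_to_mobile(user_id: str, ui: dict) -> None:
    print(f"[{user_id}'s phone] show {ui}")

dispatcher = InterDeviceDispatcher()
dispatcher.on("touch_document",
              lambda e: send_to_mobile(e.user_id,
                                       {"screen": "editor", "doc": e.payload["doc"]}))
dispatcher.dispatch(TabletopEvent("alice", "touch_document", {"doc": "budget.xlsx"}))
```

The design point is that the dispatcher owns tracking and communication, so an interface designer only registers handlers that map tabletop actions to mobile screens.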

18.
This paper introduces a novel interface designed to help blind and visually impaired people explore and navigate the Web. In contrast to traditionally used assistive tools, such as screen readers and magnifiers, the new interface employs a combination of both audio and haptic features to provide spatial and navigational information to users. The haptic features are presented via a low-cost force feedback mouse, allowing blind people to interact with the Web in a similar fashion to their sighted counterparts. The audio provides navigational and textual information through the use of non-speech sounds and synthesised speech. Interacting with the multimodal interface offers a novel experience to target users, especially those with total blindness. A series of experiments has been conducted to ascertain the usability of the interface and compare its performance to that of a traditional screen reader. Results have shown the advantages that the new multimodal interface offers blind and visually impaired people, including enhanced perception of the spatial layout of Web pages and navigation towards elements on a page. Certain issues regarding the design of the haptic and audio features raised in the evaluation are discussed and presented as recommendations for future work.

19.
In this article we present the development of a new, web-based, graphical authentication mechanism called ImagePass. The authentication mechanism introduces a novel feature based on one-time passwords that increases the security of the system without compromising its usability. Regarding usability, we explore users' perception of recognition-based graphical authentication mechanisms in a web environment. Specifically, we investigate whether the memorability of recognition-based authentication keys is influenced by image content. We also examine how the frequency of use affects the usability of the system and whether user training via mnemonic instructions improves the graphical password recognition rate. The design and development process of the proposed system began with a study that assessed how users remember abstract, face, or single-object images, and showed that single-object images have a higher memorability rate. We then proceeded with the design and development of a recognition-based graphical authentication mechanism, ImagePass, which uses single objects as the image content and follows usable security guidelines. To conclude the research, in a follow-up study we evaluated the performance of 151 participants under different conditions. We discovered that the frequency of use had a great impact on users' performance, while the users' gender had a limited, task-specific effect. In contrast, user training through mnemonic instructions showed no differences in the users' authentication metrics. However, a post-study focus-group analysis revealed that these instructions greatly influenced the users' perception of the memorability and usability of the graphical authentication. In general, the results of these studies suggest that single-object graphical authentication can be a complementary replacement for traditional passwords, especially in ubiquitous environments and on mobile devices.
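As a sketch of how a one-time, recognition-based challenge can work, the code below mixes a user's pass-images with freshly drawn decoys so that no two login grids repeat; this illustrates the general idea under our own assumptions, not ImagePass's exact scheme, and the image names are invented.

```python
import random

# Cryptographic randomness so challenge grids are unpredictable.
_rng = random.SystemRandom()

# Hypothetical pool of single-object images available as decoys.
DECOY_POOL = [f"object_{i:03d}.png" for i in range(500)]

def make_challenge(pass_images: list[str], grid_size: int = 9) -> list[str]:
    """Return a shuffled grid mixing the user's pass-images with fresh decoys."""
    decoys = [img for img in DECOY_POOL if img not in pass_images]
    grid = list(pass_images) + _rng.sample(decoys, grid_size - len(pass_images))
    _rng.shuffle(grid)
    return grid

def verify(selection: list[str], pass_images: list[str]) -> bool:
    """Authenticate only if the user picked exactly their pass-images."""
    return set(selection) == set(pass_images)

user_pass = ["object_007.png", "object_042.png", "object_311.png"]
challenge = make_challenge(user_pass)   # a different 3x3 grid every login
print(verify(user_pass, user_pass))     # -> True
```

Because the decoys (and the grid order) change on every attempt, observing one login does not reveal a reusable sequence of positions, which is the usability-preserving security gain the abstract attributes to the one-time feature.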
