Related articles
20 related articles found (search time: 9 ms)
1.
Abstract

We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess the methods' predictive power by comparing their predictions to the results of user tests. We assess the methods' persuasive power by seeing how many predicted problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

2.
Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work on UEMs. The dogmas include using inadequate procedures and measures for assessment, focusing on win–lose outcomes, holding simplistic models of how usability evaluators work, concentrating on evaluation rather than on design and working from the assumption that usability problems are real. We discuss research approaches that may help move beyond the dogmas. In particular, we emphasise detailed studies of evaluation processes, assessments of the impact of UEMs on design carried out in real-world systems development and analyses of how UEMs may be combined.

3.
Usability tests are a part of user-centered design. Usability testing with disabled people is necessary if they are among the potential users. Several researchers have already investigated usability methods with sighted people; however, research with blind users is insufficient, for example because of their different knowledge of assistive technologies and the need to analyze usability issues from inspection of the non-visual output of assistive devices. The authors therefore aspire to extend theory and practice by investigating four usability methods involving blind, visually impaired and sighted people: local tests, synchronous remote tests, tactile paper prototyping and computer-based prototyping. In terms of effectiveness of evaluation and the experience of participants and the facilitator, local tests were compared with synchronous remote tests, and tactile paper prototyping with computer-based prototyping. The comparison of local and synchronous remote tests found that the numbers of usability problems uncovered in different categories with the two approaches were comparable. In terms of task completion time, there is a significant difference for blind participants, but not for the visually impaired and sighted. Most of the blind and visually impaired participants prefer the local test. The comparison of tactile paper prototyping and computer-based prototyping revealed that tactile paper prototyping provides a better overview of an application, while interaction with computer-based prototypes is closer to reality. Problems in planning and conducting these methods, as they arise in particular with blind people, are also discussed. Based on the authors' experiences, recommendations are provided for dealing with these problems from both technical and organizational perspectives.

4.
Abstract

Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.

5.
An Application Programming Interface (API) provides a programmatic interface to a software component that is often offered publicly and may be used by programmers who are not the API's original designers. APIs play a key role in software reuse. By reusing high-quality components and services, developers can increase their productivity and avoid costly defects. The usability of an API is a qualitative characteristic that evaluates how easy the API is to use. Recent years have seen a considerable increase in research efforts aimed at evaluating the usability of APIs. An API usability evaluation can identify problem areas and provide recommendations for improving the API. In this systematic mapping study, we focus on 47 primary studies to identify the aims and methods of API usability studies. We investigate which API usability factors are evaluated, at which phases of API development the usability of an API is evaluated, and what the current limitations and open issues in API usability evaluation are. We believe the results of this literature review will be useful to both researchers and industry practitioners interested in investigating the usability of APIs and in new API usability evaluation methods.

7.
The importance of evaluating the usability of e-commerce websites is well recognised. User testing and heuristic evaluation methods are commonly used to evaluate the usability of such sites, but just how effective are they at identifying specific problems? This article describes an evaluation of these methods by comparing the number, severity and type of usability problems identified by each one. The cost of employing these methods is also considered. The findings highlight the number and severity level of 44 specific usability problem areas that were uniquely identified by either user testing or heuristic evaluation, common problems that were identified by both methods, and problems that were missed by each method. The results show that user testing uniquely identified major problems related to four specific areas and minor problems related to one area. Conversely, heuristic evaluation uniquely identified minor problems in eight specific areas and major problems in three areas.
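The unique/common breakdown used in this comparison reduces to set operations over each method's reported problems. A minimal sketch, using invented problem names and severities (nothing here is taken from the study itself):

```python
# Split usability problems into those uniquely identified by each method
# and those found by both. All problem IDs and severities are hypothetical.
user_testing = {"checkout-flow": "major", "search-results": "major", "field-labels": "minor"}
heuristic_eval = {"field-labels": "minor", "colour-contrast": "minor", "error-messages": "major"}

unique_to_ut = set(user_testing) - set(heuristic_eval)   # only user testing found these
unique_to_he = set(heuristic_eval) - set(user_testing)   # only heuristic evaluation found these
common = set(user_testing) & set(heuristic_eval)         # found by both methods

print(sorted(unique_to_ut))
print(sorted(unique_to_he))
print(sorted(common))
```

The same tally extends naturally to per-severity counts by filtering each set on the severity value before taking its length.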

8.
This paper reports on a study assessing the consistency of usability testing across organisations. Nine independent organisations evaluated the usability of the same website, Microsoft Hotmail. The results document wide differences in the selection and application of methodology, the resources applied, and the problems reported. The organisations reported 310 different usability problems. Only two problems were reported by six or more organisations, while 232 problems (75%) were uniquely reported, that is, no two teams reported the same problem. Some of the unique findings were classified as serious. Even the tasks used by most or all teams produced very different results – around 70% of the findings for each of these tasks were unique. Our main conclusion is that the simple assumption that we are all doing the same thing and getting the same results in a usability test is plainly wrong.
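The uniqueness figure at the heart of this finding can be computed mechanically from per-team problem sets. A small sketch with invented data (three teams, four distinct problems, three of which only one team reports):

```python
from collections import Counter

# Share of problems reported by exactly one team, analogous to the
# study's 75%-unique result. Team reports below are hypothetical.
team_reports = [
    {"p1", "p2"},  # team A
    {"p2", "p3"},  # team B
    {"p4"},        # team C
]

counts = Counter(problem for team in team_reports for problem in team)
unique = [p for p, n in counts.items() if n == 1]      # p1, p3, p4
share_unique = len(unique) / len(counts)               # 3 of 4 distinct problems
print(share_unique)
```

With real data the matching step is the hard part: deciding whether two differently worded reports describe the same problem is itself a judgment call, which is exactly the consistency issue the study raises.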

10.
Ergonomics, 2012, 55(14): 1021–1025

11.
Baber C. Ergonomics, 2002, 45(14): 1021–1025; discussion 1042–1046

12.
Multimedia Tools and Applications – In recent years, there has been increasing interest in the design of video games as a tool for education, training, health promotion, socialization, etc....

13.
14.
15.
Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a knowledge base. We conducted a comprehensive comparison study to determine if our theory-based framework and tool could be effectively used to find important usability problems in an interface design, relative to two other established inspection methods (heuristic evaluation and cognitive walkthrough). Results showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness and was consistent with cognitive walkthrough for these same measures. We also discuss other potential advantages of the UPI over heuristic evaluation and cognitive walkthrough when applied in practice. Potential applications of this work include a cost-effective alternative or supplement to lab-based formative usability evaluation during any stage of development.
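Thoroughness, validity and effectiveness are commonly defined in the UEM-comparison literature as ratios over a set of "real" problems; the effectiveness measure is then the product of the other two. A sketch under those assumed definitions (the problem sets are invented, not the study's data):

```python
# Assumed standard definitions (not quoted from this paper):
#   thoroughness  = real problems found / all known real problems
#   validity      = real problems found / all problems reported
#   effectiveness = thoroughness * validity

real_problems = {f"r{i}" for i in range(10)}     # 10 known real problems
reported = {"r0", "r1", "r2", "false-alarm"}     # one method's report, incl. a false positive

hits = reported & real_problems                  # {"r0", "r1", "r2"}
thoroughness = len(hits) / len(real_problems)    # 3/10 = 0.3
validity = len(hits) / len(reported)             # 3/4  = 0.75
effectiveness = thoroughness * validity          # 0.225
```

The product form penalizes both missed real problems (low thoroughness) and false alarms (low validity), which is why a method can match another on one measure yet differ on effectiveness.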

16.
We discuss the impact of cultural differences on usability evaluations that are based on the thinking-aloud method (TA). The term ‘cultural differences’ helps distinguish differences in the perception and thinking of Westerners (people from Western Europe and US citizens with European origins) and Easterners (people from China and the countries heavily influenced by its culture). We illustrate the impact of cultural cognition on four central elements of TA: (1) instructions and tasks, (2) the user’s verbalizations, (3) the evaluator’s reading of the user, and (4) the overall relationship between user and evaluator. In conclusion, we point to the importance of matching the task presentation to users’ cultural background, the different effects of thinking aloud on task performance between Easterners and Westerners, the differences in nonverbal behaviour that affect usability problem detection, and, finally, the complexity of the overall relationship between a user and an evaluator with different cultural backgrounds.

17.
The main goal of this work is to propose a method to evaluate user interfaces using task models and logs generated from a user test of an application. The method can be incorporated into an automatic tool that gives the designer information useful for evaluating and improving the user interface. These results include an analysis of the tasks that were accomplished, those that failed and those never tried, user errors and their types, time-related information, task patterns among the accomplished tasks, and the tasks available from the current state of the user session. This information is also useful to an evaluator checking whether the specified usability goals have been accomplished.
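The accomplished / failed / never-tried classification described here can be sketched as a comparison between a task model and a session log. Everything below is hypothetical (task names and the log format are invented for illustration, not taken from the tool):

```python
# Classify tasks by comparing a user-session log against the task model.
task_model = {"login", "search", "add-to-cart", "checkout"}
session_log = {"login": "success", "search": "error"}    # task -> observed outcome

accomplished = {t for t, outcome in session_log.items() if outcome == "success"}
failed = {t for t, outcome in session_log.items() if outcome != "success"}
never_tried = task_model - session_log.keys()            # in the model, absent from the log

print(sorted(accomplished), sorted(failed), sorted(never_tried))
```

A real implementation would also order log events in time to extract task patterns and compute durations, as the method's other outputs require.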

18.
The paper motivates the need to acquire methodological knowledge for involving children as test users in usability testing. It introduces a methodological framework for delineating comparative assessments of usability testing methods for child participants. This framework consists of three dimensions: (1) assessment criteria for usability testing methods, (2) characteristics describing usability testing methods and, finally, (3) characteristics of children that may impact the process and the results of usability testing. Two comparative studies are discussed in the context of this framework, along with implications for future research.

19.
Progress in the field of e-learning has been slow, with related problems mainly associated with the poor design of e-learning systems. Moreover, because the importance of usability is often underestimated, usability studies are not very frequent. This paper reports on the usability assessment of intelligent learning and teaching systems that are based on the TEx-Sys model and are intended to enhance the process of knowledge acquisition in daily classroom settings. The applied scenario-based usability evaluation, a combination of behaviour-based and opinion-based measurements, made it possible to quantify usability in terms of users’ (teachers’ and students’) performance and satisfaction. Based on the results achieved, the main directions for interface redesign are offered. The experience gained indicates that useful usability assessments, with significant identification of interface limitations, can be performed quite easily and quickly. On the other hand, the study raised a series of questions that require further comprehensive research to be clarified, the more so if the employment of universal design in an e-learning context is considered.

20.
There has been a rapid increase in research evaluating the usability of Augmented Reality (AR) systems in recent years. Although many different styles of evaluation are used, there is no clear consensus on the most relevant approaches. We report a review of papers published in the proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR) over the past decade, building on the previous work of Swan and Gabbard (2005). First, we investigate the evaluation goals, measurements and methods of ISMAR papers according to four categories of usability research: performance, perception and cognition, collaboration and User Experience (UX). Second, we consider the balance of evaluation approaches with regard to empirical–analytical and quantitative–qualitative dimensions and participant demographics. Finally, we identify potential emphases for the usability study of AR systems in the future. These analyses provide a reference point for current evaluation techniques, trends and challenges, which benefits researchers intending to design, conduct and interpret usability evaluations for future AR systems.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号