Similar Documents
20 similar documents found (search took 31 ms).
1.
The effectiveness of four techniques in evaluating the usability of a graphical user interface is presented. The techniques are heuristic evaluation, usability testing, guidelines, and cognitive walkthrough. The techniques are compared as to the number, type, and severity of the problems each could identify for a specific product.

2.
For different levels of user performance, different types of information are processed and users will make different types of errors. Based on the error's immediate cause and the information being processed, usability problems can be classified into three categories: usability problems associated with skill-based, rule-based, and knowledge-based levels of performance. In this paper, a user interface for a Web-based software program was evaluated with two usability evaluation methods, user testing and heuristic evaluation. The experiment showed that heuristic evaluation with human factors experts is more effective than user testing in identifying usability problems associated with skill-based and rule-based levels of performance. User testing is more effective than heuristic evaluation in finding usability problems associated with the knowledge-based level of performance. The practical application of this research is also discussed in the paper.

3.
Through the rapid spread of smartphones, users have access to many types of applications similar to those on desktop computer systems. Smartphone applications using augmented reality (AR) technology make use of users' location information. Because AR applications call for new evaluation methods, principles for improving their usability and user convenience need to be developed. The purpose of the current study is to develop usability principles for the development and evaluation of smartphone applications using AR technology. We develop usability principles for smartphone AR applications by analyzing existing research on heuristic evaluation methods, design principles for AR systems, guidelines for handheld mobile device interfaces, and usability principles for tangible user interfaces. We conducted a heuristic evaluation of three popular smartphone AR applications to identify usability problems and suggested new design guidelines to solve the identified problems. We then developed an improved prototype of an Android-based smartphone AR application and conducted usability testing on it to validate the effects of the usability principles.

4.
Designing easy-to-use mobile applications is a difficult task. To optimize the development of a usable mobile application, it is necessary to consider the mobile usage context in the design and evaluation of the user-system interaction. In our research we designed a method that aligns the inspection method "Software ArchitecTure analysis of Usability Requirements realizatioN" (SATURN) with a mobile usability evaluation in the form of a user test. We propose to use mobile context factors, and thus requirements, as a common basis for both inspection and user test. After conducting both the analysis and the user test, the results, described as usability problems, are mapped and discussed. The mobile context factors identified define and describe the usage context of a mobile application. We exemplify and apply our approach in a case study, which shows how our method can identify more usability problems than each method used separately. Additionally, we could confirm the validity and identify the severity of usability problems found by both methods. Our work presents how a combination of both methods allows usability issues to be addressed in a more holistic way. We argue that the increased quantity and quality of results can reduce the number of iterations required in the early stages of an iterative software development process.

5.
We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess each method's predictive power by comparing the predictions to the results of user tests. We assess each method's persuasive power by seeing how many problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

6.
Kwahk J, Han SH 《Applied Ergonomics》2002, 33(5): 419-431
Usability evaluation is now considered an essential procedure in consumer product development. Many studies have been conducted to develop various techniques and methods of usability evaluation in the hope of helping evaluators choose appropriate methods. However, planning and conducting a usability evaluation requires consideration of a number of factors surrounding the evaluation process, including the product, user, activity, and environmental characteristics. From this perspective, this study suggests a new methodology of usability evaluation through a simple, structured framework. The framework is outlined by three major components: the interface features of a product as design variables; the evaluation context consisting of user, product, activity, and environment as context variables; and the usability measures as dependent variables. Based on this framework, this study established methods to specify the product interface features, to define the evaluation context, and to measure usability. The effectiveness of this methodology was demonstrated through case studies in which the usability of audiovisual products was evaluated using the methods developed in this study. This study is expected to help usability practitioners in the consumer electronics industry in various ways. Most directly, it helps evaluators plan and conduct usability evaluation sessions in a systematic and structured manner. In addition, it can be applied to other categories of consumer products (such as appliances, automobiles, communication devices, etc.) with minor modifications as necessary.

7.
Abstract

We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess each method's predictive power by comparing the predictions to the results of user tests. We assess each method's persuasive power by seeing how many problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

8.
Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a knowledge base. We conducted a comprehensive comparison study to determine if our theory-based framework and tool could be effectively used to find important usability problems in an interface design, relative to two other established inspection methods (heuristic evaluation and cognitive walkthrough). Results showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness and was consistent with cognitive walkthrough for these same measures. We also discuss other potential advantages of the UPI over heuristic evaluation and cognitive walkthrough when applied in practice. Potential applications of this work include a cost-effective alternative or supplement to lab-based formative usability evaluation during any stage of development.
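The thoroughness, validity, and effectiveness measures used in such comparisons can be sketched in a few lines. The definitions below follow the usual formulation in the inspection-method literature; the problem identifiers are invented for illustration and are not data from the UPI study.

```python
# Sketch of the standard thoroughness / validity / effectiveness metrics.
# Problem identifiers below are hypothetical illustrative data.

def evaluate_method(found, real):
    """Compare problems reported by a method against the set of real problems
    (which, outside a controlled study, is generally unknowable)."""
    found, real = set(found), set(real)
    hits = found & real
    thoroughness = len(hits) / len(real)   # share of real problems found
    validity = len(hits) / len(found)      # share of reported problems that are real
    effectiveness = thoroughness * validity
    return thoroughness, validity, effectiveness

real_problems = {"P1", "P2", "P3", "P4", "P5"}
reported = {"P1", "P2", "P3", "X1"}        # three hits, one false positive
t, v, e = evaluate_method(reported, real_problems)
# thoroughness = 3/5 = 0.6, validity = 3/4 = 0.75, effectiveness ≈ 0.45
print(t, v, round(e, 2))
```

Note that all three figures require the full set of real problems as a denominator, which is why they are computable in a controlled comparison study but not from an ordinary practical test.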

9.
In the last decade, research on the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. Thus, there is a growing demand for evaluation methodologies that assess the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluation from the viewpoint of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones was developed and then utilized to build the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The proposed checklist discovered the majority of the usability problems found by usability testing, as well as additional problems. It is expected that the usability checklist proposed in this study can be used quickly and efficiently by usability practitioners to evaluate mobile phone UIs in the middle of the mobile phone development process.

10.
The number of usability problems discovered in a user trial or identified in a heuristic evaluation can never be claimed to be exhaustive. This raises the question of how many usability problems remain undetected. In ergonomics/human factors research this subject matter is often addressed by asking how many participants are sufficient to discover a specific proportion of the usability problems. Current approaches to answering this question suffer from various biasing mechanisms, which undermine the credibility of the popular 'rule of thumb' that five participants are sufficient for the discovery of 80% of 'all' usability problems. This 5-user rule appears to be speculative in its application as a stop rule. In this paper, I compare actual estimates of the number of usability problems. Underestimation surfaces as a permanent threat. The so-called Turing estimate (CT) appears to be the most satisfactory. However, CT estimates may also suffer from underestimation. Therefore max(CT, CF), with the CF estimate based on partitioned frequencies, is proposed as the most adequate estimate of the number of usability problems in the studies presented.
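The estimators discussed above can be illustrated with a short sketch. The Good-Turing form below is the standard way a 'Turing estimate' is computed from problems seen exactly once; the detection counts and the p = 0.31 figure behind the classic 5-user computation are illustrative assumptions, not data from the paper.

```python
# Sketch: estimating the total number of usability problems from how often
# each discovered problem was encountered. Sample counts are invented.

def turing_estimate(detections):
    """Good-Turing style estimate of the total problem count.

    detections: for each discovered problem, the number of participants
    who encountered it.
    """
    n_observed = len(detections)
    total_events = sum(detections)
    singletons = sum(1 for d in detections if d == 1)  # problems seen once
    # Estimated probability mass of still-undiscovered problems.
    p_unseen = singletons / total_events
    return n_observed / (1.0 - p_unseen)

# Classic binomial model behind the 5-user rule: a problem with
# per-participant detection probability p is found by at least one of
# n users with probability 1 - (1 - p)**n. With the often-quoted p = 0.31:
discovery_rate = 1 - (1 - 0.31) ** 5
print(f"{discovery_rate:.2f}")  # roughly 0.84, the basis of the 80% claim

# Ten observed problems, four of them seen by only one participant each:
sample = [1, 1, 1, 1, 2, 2, 3, 4, 5, 6]
print(round(turing_estimate(sample), 1))  # estimates ~11.8 problems in total
```

The gap between the observed 10 and the estimated ~11.8 is exactly the undetected remainder the paper is concerned with; as it notes, even this estimate can undershoot.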

11.
In the past decade, home appliances have developed rapidly to satisfy the various requirements of users. Thus, there is a growing need for evaluation methods that reflect the usability of home appliances quickly and comprehensively. This study aims to develop a scenario-based usability checklist for product designers early in the design process. The scenario-based usability checklist consists of two parts: 1) a heuristic evaluation checklist, and 2) a scenario evaluation checklist. In developing the heuristic evaluation checklist, usability factors of home appliances are extracted and then coupled to user interface (UI) elements. In developing the scenario evaluation checklist, scenarios are developed through brainstorming in focus group interviews (FGIs), and evaluation elements are then extracted from analysis of those scenarios. The proposed scenario-based usability checklists enable designers to evaluate product designs quickly and comprehensively, from the users' viewpoint, early in the development process. © 2010 Wiley Periodicals, Inc.

12.
Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.

13.
In this paper I argue that while the notions of thoroughness, efficiency, and validity of problems identified in usability tests are mandatory for researchers seeking to establish the effectiveness of a given testing procedure, the notions of thoroughness and efficiency in particular are irrelevant in HCI practice. In research devoted to validating a given test methodology, it is imperative that all usability problems be identified in the product or application used as a test bed. In HCI practice, however, the objective is to identify as many usability problems as possible with limited resources and within a limited time frame, and to define and implement solutions to these early in the development process. It is impossible to know whether all usability problems have been identified in a particular test or type of evaluation unless testing is repeated until it reaches an asymptote, a point at which no new problems emerge in a test. Asymptotic testing is not, and should not be, done in practice; it is as unfeasible as it is irrelevant in a work context. In the absence of a complete usability problem set, the notions of thoroughness and efficiency are meaningless and impossible to calculate. While validity can be assessed for individual problems in practical usability tests, it cannot yield information about the effectiveness of the testing method per se, because the problem set is unlikely to be complete. An example is provided to support my argument.

Relevance to industry

The point of a usability test is to identify as many usability problems as possible, with limited resources. The resulting problems hopefully include the most severe ones, but not the entire problem set. While HCI practitioners should know about thoroughness, efficiency, and validity of the test methods they elect to employ, they should not attempt to assess these from their test findings. Neither thoroughness nor efficiency can be determined from an incomplete problem set, and the notion of validity is tied to the severity of individual usability problems, not to the testing method as such.
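The argument about asymptotic testing can be made concrete with a toy simulation: test sessions continue until a long run of sessions yields nothing new, and only then could the observed problem set plausibly be called complete. The detection probabilities and the stop rule below are invented for illustration.

```python
import random

# Toy simulation of 'asymptotic' testing: keep running sessions until a
# stretch of sessions discovers no new problem. All numbers are hypothetical.
random.seed(1)
detect_prob = [0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02]  # 7 'real' problems

found = set()
sessions = 0
quiet = 0                      # consecutive sessions with nothing new
while quiet < 10:              # illustrative stop rule, not a recommendation
    sessions += 1
    seen = {i for i, p in enumerate(detect_prob) if random.random() < p}
    quiet = 0 if (seen - found) else quiet + 1
    found |= seen

# Even after many sessions, rare problems (e.g. p = 0.02) may remain hidden,
# so the 'complete' set reached this way can still be an undercount.
print(f"{sessions} sessions, {len(found)} of {len(detect_prob)} problems found")
```

The session count this produces is far beyond what any practical test budget allows, which is the paper's point: without such exhaustive testing the full problem set, and hence thoroughness and efficiency, stay out of reach.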


14.
张丽霞, 梁华坤, 傅熠, 宋鸿陟 《计算机教育》2010, (14): 136-140, 158
This paper surveys four categories of usability testing methods: user modeling, user surveys, expert review, and observation. Drawing on practical usability testing experience, it distills a reasonable and practical usability testing process that achieves high efficiency and low cost while discovering as many usability problems as possible. It also proposes that programming courses should introduce students to usability testing methods and processes, in order to cultivate students' ability to perform usability testing on the software systems they develop.

15.
Abstract

Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.

16.
This study highlights how heuristic evaluation, as a usability evaluation method, can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluations in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed about the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative, multi-session evaluation process rather than relying on one evaluator and one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee, given the heuristic abilities of each profession. A multi-session, interdisciplinary heuristic evaluation method can save both project budget and time, while ensuring a reduced error rate for the universal use of built environments.

17.
18.
Considering the importance of e-commerce website accessibility and usability, a study of one of the most relevant Portuguese e-commerce websites was performed using both automatic and manual assessment procedures. In an initial stage, we evaluated the chosen website with SortSite, an automatic Web accessibility and usability evaluation tool; after that, we performed a manual evaluation to verify each previously detected error and present possible solutions to overcome those faults. In a third phase, three usability specialists performed a heuristic evaluation of the chosen website. Finally, user tests with blind people were carried out in order to fully assess compliance with accessibility and usability guidelines and standards. The results showed that the platform scored well in the automatic evaluation; however, when the heuristic and manual evaluations were performed, some accessibility and usability problems were discovered. Moreover, the user test results showed poor marks for efficiency, effectiveness, and satisfaction among the group of participants. In conclusion, we highlight user interaction problems and propose seven recommendations focused on enhancing the accessibility and usability of not only the evaluated e-commerce website, but also other similar ones.

19.
Perspective-based Usability Inspection: An Empirical Validation of Efficacy
Inspection is a fundamental means of achieving software usability. Past research showed that current usability inspection techniques were rather ineffective. We developed perspective-based usability inspection, which divides the large variety of usability issues along different perspectives and focuses each inspection session on one perspective. We conducted a controlled experiment to study its effectiveness, using a post-test-only control group experimental design with 24 professionals as subjects. The control group used heuristic evaluation, the most popular technique for usability inspection. The experimental design and the results are presented, which show that inspectors applying perspective-based inspection not only found more usability problems related to their assigned perspectives, but also found more problems overall. Perspective-based inspection was shown to be more effective for the aggregated results of multiple inspectors, finding about 30% more usability problems for 3 inspectors. A management implication of this study is that assigning inspectors more specific responsibilities leads to higher performance. Internal and external threats to validity are discussed to help better interpret the results and to guide future empirical studies.

20.
Many efforts to improve the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both of these approaches depend on a complete division of work between developers and evaluators. This article takes a different approach by exploring whether software developers and designers can be trained to conduct their own usability evaluations. The article is based on an empirical study in which 36 teams with a total of 234 first-year university students in software development and design programmes were trained through a 40-hour introductory course in user-based website usability testing. They used the techniques from this course to plan and conduct a usability evaluation of an interactive website and to interpret its results. They gained good competence in conducting the evaluation, defining user tasks, and producing a usability report, but were less successful in acquiring skills for identifying and describing usability problems.
