Similar Literature
20 similar documents retrieved.
1.
The smartphone market is nowadays highly competitive. When buying a new device, users focus on visual esthetics, ergonomics, performance, and user experience, among other factors. Assessing usability issues allows these aspects to be improved. One popular method for detecting usability problems is heuristic evaluation, in which evaluators employ a set of usability heuristics as a guide, so using proper heuristics is highly relevant. In this paper we present SMASH, a set of 12 usability heuristics for smartphones and mobile applications, developed iteratively. SMASH (previously named TMD: Usability heuristics for Touchscreen-based Mobile Devices) was experimentally validated; the results support its utility and effectiveness.

2.
Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a knowledge base. We conducted a comprehensive comparison study to determine if our theory-based framework and tool could be effectively used to find important usability problems in an interface design, relative to two other established inspection methods (heuristic evaluation and cognitive walkthrough). Results showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness and was consistent with cognitive walkthrough for these same measures. We also discuss other potential advantages of the UPI over heuristic evaluation and cognitive walkthrough when applied in practice. Potential applications of this work include a cost-effective alternative or supplement to lab-based formative usability evaluation during any stage of development.
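For readers unfamiliar with the comparison measures named above, one common formulation from the usability-evaluation-method literature is summarized below. This is a standard way the terms are defined, not necessarily the exact operationalization used in the study; "real" problems are those confirmed against a reference problem set.

```latex
% Common definitions of UEM comparison metrics (one standard formulation):
\[
\text{thoroughness} = \frac{\#\,\text{real problems found}}{\#\,\text{real problems that exist}},\qquad
\text{validity} = \frac{\#\,\text{real problems found}}{\#\,\text{problems reported}},
\]
\[
\text{effectiveness} = \text{thoroughness} \times \text{validity}.
\]
```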

3.
Although various methods exist for performing usability evaluation, they lack a systematic framework for guiding and structuring the assessment and reporting activities. Consequently, analysis and reporting of usability data are ad hoc and do not live up to their potential in cost effectiveness, and usability engineering support tools are not well integrated. We developed the User Action Framework, a structured knowledge base of usability concepts and issues, as a framework on which to build a broad suite of usability engineering support tools. The User Action Framework helps to guide the development of each tool and to integrate the set of tools in the practitioner's working environment. An important characteristic of the User Action Framework is its own reliability in terms of consistent use by practitioners. Consistent understanding and reporting of the underlying causes of usability problems are requirements for cost-effective analysis and redesign. Thus, high reliability in terms of agreement by users on what the User Action Framework means and how it is used is essential for its role as a common foundation for the tools. Here we describe how we achieved high reliability in the User Action Framework, and we support the claim with strongly positive results of a summative reliability study conducted to measure agreement among 10 usability experts in classifying 15 different usability problems. Reliability data from the User Action Framework are also compared to data collected from nine of the same usability experts using a classic heuristic evaluation technique.
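The reliability claim above rests on measuring agreement among multiple experts assigning problems to categories. The abstract does not give the statistic or data used; as an illustration only, Fleiss' kappa is one standard way to quantify multi-rater categorical agreement, and the sketch below computes it from a hypothetical raters-by-problems classification matrix.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of category counts.

    counts[i, j] is the number of raters who put item i into category j;
    every item must be rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()

    # Per-item observed agreement, averaged over items.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Expected chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 15 problems, 10 raters, 4 assumed classification categories.
rng = np.random.default_rng(0)
demo = np.zeros((15, 4), dtype=int)
for i in range(15):
    votes = rng.choice(4, size=10, p=[0.6, 0.2, 0.1, 0.1])  # mostly agreeing raters
    demo[i] = np.bincount(votes, minlength=4)
print(f"Fleiss' kappa = {fleiss_kappa(demo):.2f}")
```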

4.
Addressing the lack of a usability evaluation index system in current research on the usability of vehicle-mounted computer systems, this paper carries out an in-depth analysis and study. It first examines the basic principles for constructing a usability evaluation system for vehicle-mounted computer systems, then analyzes the main content of usability evaluation, and finally proposes an index system for usability evaluation. The results show that usability evaluation can, on the one hand, reflect problems in a vehicle-mounted computer system from the user's perspective and provide a basis for designers' decisions, and, on the other hand, serve as a reference for suppliers seeking to improve system performance.

5.
Evaluating e-learning systems is a complex activity which requires consideration of several criteria addressing quality in use as well as educational quality. Heuristic evaluation is a widespread method for usability evaluation, yet its output is often prone to subjective variability, primarily due to the generality of many heuristics. This paper presents the pattern-based (PB) inspection, which aims at reducing this drawback by exploiting a set of evaluation patterns to systematically drive inspectors in their evaluation activities. The application of PB inspection to the evaluation of e-learning systems is reported in this paper, together with a study that compares this method to heuristic evaluation and user testing. The study involved 73 novice evaluators and 25 end users, who evaluated an e-learning application using one of the three techniques. The comparison metric was defined along six major dimensions, covering concepts of classical test theory and pragmatic aspects of usability evaluation. The study showed that evaluation patterns, capitalizing on the reuse of expert evaluators' know-how, provide a systematic framework which reduces reliance on individual skills, increases inter-rater reliability and output standardization, permits the discovery of a larger set of different problems and decreases evaluation cost. Results also indicated that evaluation in general is strongly dependent on the methodological apparatus as well as on judgement bias and individual preferences of evaluators, providing support to the conceptualisation of interactive quality as a subjective judgement, recently brought forward by the UX research agenda.

6.
As the diversity of services in the financial market increases, it is critical to design usable banking software in order to overcome the complex structure of the system. The current study presents a usability guideline based on heuristics and their corresponding criteria that can be used during the early stages of the banking software design process. In the design of the usability guideline, the heuristics and their criteria are categorized in terms of their effectiveness in solving usability problems, grouped by severity ranging from usability catastrophes to cosmetic problems. The current study comprises three main steps: first, actual usability problems from three banking software development projects are categorized according to their severity level. Secondly, usability criteria are rated for how well they explain the usability problems encountered. Finally, usability heuristics are categorized according to the severity level of usability problems through two analytical models: correspondence and cluster analyses. As a result, designers and project managers may give more importance to the heuristics related to the following usability problem categories: usability catastrophes and then major usability problems. Furthermore, the proposed guideline can be used to understand which usability criteria would be helpful in explaining usability problems as well as preventing banking system catastrophes, by highlighting the critical parts in the system design of banking software.
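The abstract names correspondence and cluster analyses but gives no details of the data or procedure. As a rough illustration of the clustering half only, the sketch below groups heuristics by hypothetical severity profiles using hierarchical clustering; all names and numbers are invented for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: rows = usability heuristics, columns = how often each
# heuristic explained problems of a given severity (cosmetic ... catastrophe).
heuristic_names = ["visibility", "match", "control", "consistency", "error prevention"]
severity_profile = np.array([
    [5, 2, 1, 0],   # explains mostly cosmetic problems
    [1, 4, 3, 1],
    [0, 2, 5, 3],   # explains mostly major/catastrophic problems
    [4, 3, 1, 0],
    [0, 1, 4, 4],
])

# Ward hierarchical clustering groups heuristics with similar severity profiles,
# analogous in spirit (not in detail) to the cluster-analysis step described above.
Z = linkage(severity_profile, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, label in zip(heuristic_names, labels):
    print(f"{name}: cluster {label}")
```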

7.
As a mobile phone has various advanced functionalities or features, usability issues are increasingly challenging. Due to the particular characteristics of a mobile phone, typical usability evaluation methods and heuristics, most of which are relevant to a software system, might not effectively be applied to a mobile phone. Another point to consider is that usability evaluation activities should help designers find usability problems easily and produce better design solutions. To support usability practitioners in the mobile phone industry, we propose a framework for evaluating the usability of a mobile phone in an analytic way, based on a multi-level, hierarchical model of usability factors. The model was developed on the basis of a set of collected usability problems and our previous study on a conceptual framework for identifying usability impact factors. It has multiple abstraction levels, each of which considers the usability of a mobile phone from a particular perspective. As there are goal-means relationships between adjacent levels, a range of usability issues can be interpreted in a holistic as well as diagnostic way. Another advantage is that it supports two different types of evaluation approaches: task-based and interface-based. To support both evaluation approaches, we developed four sets of checklists: one for task-based evaluation and one for each of three interface types: Logical User Interface (LUI), Physical User Interface (PUI) and Graphical User Interface (GUI). The proposed framework specifies an approach to quantifying usability so that several usability aspects are collectively measured, with the use of the checklists, to give a single score. A small case study was conducted in order to examine the applicability of the framework and to identify the aspects of the framework to be improved. It showed that the framework could be a useful tool for evaluating the usability of a mobile phone. Based on the case study, we improved the framework so that usability practitioners can use it more easily and consistently.
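The abstract describes collapsing several checklist-based usability aspects into a single score but does not give the weighting scheme. The sketch below shows one plausible weighted-average aggregation; the checklist names, item scores and weights are illustrative assumptions, not the paper's actual scoring rules.

```python
# Illustrative sketch: aggregate checklist scores into one usability score.
# Scores are normalized to 0-1; weights are assumed for the example only.
checklists = {
    "task-based": ([0.8, 0.9, 0.7], 0.4),   # (item scores, weight)
    "LUI":        ([0.6, 0.7, 0.8], 0.2),
    "PUI":        ([0.9, 0.9, 0.8], 0.2),
    "GUI":        ([0.7, 0.6, 0.9], 0.2),
}

def overall_usability(checklists):
    """Weighted average of per-checklist mean scores."""
    total_weight = sum(weight for _, weight in checklists.values())
    weighted = sum((sum(items) / len(items)) * weight
                   for items, weight in checklists.values())
    return weighted / total_weight

print(f"Overall usability score: {overall_usability(checklists):.2f}")
```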

8.
With the functional revolution in modern cars, evaluation methods that can be used in all phases of driver–car interaction design have gained importance. It is crucial for car manufacturers to discover and solve safety issues early in the interaction design process. A current problem is thus to find a correlation between the formative methods that are used during development and the summative methods that are used when the product has reached the customer. This paper investigates the correlation between efficiency metrics from summative and formative evaluations, comparing the results of two studies on sound and navigation system tasks. The first, an analysis of the J.D. Power and Associates APEAL survey, consists of answers given by about two thousand customers. The second, an expert evaluation study, was done by six evaluators who assessed the layouts by task completion time, TLX and Nielsen heuristics. The results show a high degree of correlation between the studies in terms of task efficiency, i.e. between customer ratings and task completion time, and between customer ratings and TLX. However, no correlation was observed between Nielsen heuristics and customer ratings, task completion time or TLX. The results of the studies suggest the possibility of developing a usability evaluation framework that includes both formative and summative approaches, as they show a high degree of consistency between the different methodologies. Hence, combining the expert evaluation method with a quantitative measure such as task completion time should be more useful for driver–car interaction design.
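To make the correlation analysis concrete: given per-task customer ratings and expert metrics, a rank or linear correlation can be computed as below. All values are invented for illustration; the study's APEAL and expert data are not reproduced in this abstract.

```python
# Illustrative correlation between customer ratings and expert metrics.
from scipy.stats import pearsonr, spearmanr

customer_rating   = [7.8, 6.5, 8.1, 5.9, 7.2, 6.8]    # per-task survey score (invented)
task_time_seconds = [12.0, 25.0, 10.5, 30.0, 15.0, 22.0]
tlx_workload      = [22, 48, 18, 55, 30, 41]

# Higher customer ratings are expected to go with lower times and workload,
# i.e. negative correlations, matching the relationship reported above.
rho_time, p_time = spearmanr(customer_rating, task_time_seconds)
r_tlx, p_tlx = pearsonr(customer_rating, tlx_workload)
print(f"rating vs. completion time: rho = {rho_time:.2f} (p = {p_time:.3f})")
print(f"rating vs. TLX:             r   = {r_tlx:.2f} (p = {p_tlx:.3f})")
```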

9.
A heuristic evaluation method allows the usability of applications in a given domain to be evaluated. To evaluate applications that have specific domain features, researchers can use sets of domain-specific usability heuristics in addition to the well-known (usually Nielsen's) heuristics. Heuristics can also focus on User eXperience (UX) aspects other than usability. In a previous work, we proposed a formal methodology for establishing usability/UX heuristics. The methodology has 8 stages, including activities to formulate, specify, validate and refine a new set of heuristics for a specific application domain, and it was validated through expert opinion and several case studies. Although we explained each of its stages in detail when specifying the methodology, some activities can be difficult to perform without a guide that helps the researcher determine how the stages should be carried out. This article presents a detailed explanation of how to apply each stage of the methodology to create a new set of heuristics for a specific domain. Additionally, this paper explains how to iterate the methodology's stages and when to stop the process of developing new heuristics.

10.
11.
Heuristic evaluation is one of the most actively used techniques for analyzing usability, as it is quick and inexpensive. This technique is based on following a given set of heuristics, which are typically defined as broad rules of thumb. In this paper, we propose a systematic and generalizable approach to this type of evaluation based on using comprehensive taxonomies as a source for the heuristics. This approach contrasts with other typical approaches, such as following (or adapting) Jakob Nielsen’s heuristics or creating ad hoc heuristics (formally or informally). The usefulness of our approach is investigated in two ways. Firstly, we carry out an actual heuristic evaluation of a mobile app in this manner, which we describe in detail. Secondly, we compare our approach and Nielsen’s. Additionally, we identify some limitations in Nielsen’s heuristics and some inconsistencies between them and established usability models, including Nielsen’s own.

12.

This study was conducted to compare CHE between Human-Computer Interaction (HCI) experts and novices in evaluating a smartphone app for a cultural heritage site. It uses the Smartphone Mobile Application heuRisTics (SMART), which focus on smartphone applications, and the traditional Nielsen heuristics, which focus on a wider range of interactive systems. Six experts and six novices used a severity rating scale to categorise the severity of the usability issues, and the issues were mapped to both sets of heuristics. The study found that expert and novice evaluators identified 19 and 14 usability issues, respectively, with ten issues in common; however, these shared issues were rated differently. Although a t-test indicates no significant differences between experts and novices in their ratings of usability issues, these results nevertheless indicate the need for both kinds of evaluators in CHE to provide a more comprehensive perspective on the severity of the usability issues. Furthermore, the mapping of the usability issues to the Nielsen and SMART heuristics showed that more issues with the smartphone app could be addressed through smartphone-specific heuristics than through general heuristics, indicating that they are a better tool for heuristic evaluation of smartphone apps. This study also provides new insight into the number of evaluators needed for CHE.

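As an illustration of the expert-versus-novice comparison described above, an independent-samples t-test on severity ratings could look like the sketch below. The ratings are invented for the example; the study's actual data are not reproduced in this abstract.

```python
# Illustrative only: severity ratings (0-4 scale) that experts and novices
# might assign to the ten usability issues both groups found; invented data.
from scipy.stats import ttest_ind

expert_severity = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
novice_severity = [2, 2, 3, 3, 1, 2, 3, 2, 2, 3]

t_stat, p_value = ttest_ind(expert_severity, novice_severity)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would mirror the "no significant difference" finding
# reported above, even though the two groups' individual ratings differ.
```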

13.
MDD tools are very useful for drawing conceptual models and automating code generation. Even though this brings many benefits, wide adoption of MDD tools is not yet a reality. Various research activities are being undertaken to find out why and to provide the required solutions. However, insufficient research has been done on a key factor for the acceptance of MDD tools: usability. This paper presents a framework for evaluating the usability of MDD tools with the help of end-users. The framework will be used as a basis for a family of experiments to get clear insights into the barriers to usability that prevent MDD tools from being widely adopted in industry. To illustrate the applicability of our framework, we instantiated it to perform a usability evaluation of a tool named INTEGRANOVA. Furthermore, we compared the outcome of the study with another usability evaluation technique based on ergonomic criteria.

14.
Heuristic evaluation is one of the most widely-used methods for evaluating the usability of a software product. Proposed in 1990 by Nielsen and Molich, it consists of having a small group of evaluators perform a systematic review of a system under a set of guiding principles known as usability heuristics. Although Nielsen’s 10 usability heuristics are used as the de facto standard in the process of heuristic evaluation, recent research has provided evidence not only for the need for custom domain-specific heuristics, but also for the development of methodological processes to create such sets of heuristics. In this work we apply the PROMETHEUS methodology, recently proposed by the authors, to develop the VLEs heuristics: a novel set of usability heuristics for the domain of virtual learning environments. In addition to the development of these heuristics, our research serves as further empirical validation of PROMETHEUS. To validate our results we performed a heuristic evaluation using both the VLEs and Nielsen’s heuristics. Our design explicitly controls the effect of evaluator variability by using a large number of evaluators: for both sets of heuristics the evaluation was performed independently by 7 groups of 5 evaluators each, that is, 70 evaluators in total, 35 using the VLEs and 35 using Nielsen’s heuristics. In addition, we performed rigorous statistical analyses to establish the validity of the novel VLEs heuristics. The results show that the VLEs heuristics perform better than Nielsen’s, finding more problems, which are also more relevant to the domain, as well as satisfying other quantitative and qualitative criteria. Finally, in contrast to evaluators using Nielsen’s heuristics, evaluators using the VLEs heuristics reported greater satisfaction regarding utility, clarity, ease of use, and need of additional elements.

15.
Websites that are usable and accessible can have a positive impact on the overall user experience. Usability Inspection Methods (UIMs) can be applied to evaluate and measure usability. Current research in the fields of Web Accessibility and Human–Computer Interaction (HCI) needs additional UIMs that can measure accessibility in addition to usability alone. In this article, a novel UIM in the form of a heuristic evaluation is presented. The heuristic evaluation aims to support HCI experts and Web developers in designing and evaluating websites that provide positive user experiences to users who are deaf. This article discusses the development of the Heuristic Evaluation for Deaf Web User Experience (HE4DWUX); following an iteration cycle, version 2 of the HE4DWUX is presented in Appendix A. An existing three-phase process to develop heuristics for specific application domains was applied to construct the HE4DWUX. The outcome of this research is 12 heuristics, each containing its own set of checklist items to operationalize its applicability in measuring the Web user experience for users who are deaf. The heuristics and their checklist items can identify important aspects of design that will impact the Web user experience for this particular user group.

16.
This paper describes a heuristic creation process based on the notion of critical parameters, and a comparison experiment that demonstrates the utility of heuristics created for a specific system class. We focus on two examples of using the newly created heuristics to illustrate the utility of the usability evaluation method, as well as to provide support for the creation process, and we report on successes and frustrations of two classes of users, novice evaluators and domain experts, who identified usability problems with the new heuristics. We argue that establishing critical parameters for other domains will support efforts in creating tailored evaluation tools.

17.
In this paper I argue that while the notions of thoroughness, efficiency, and validity of problems identified in usability tests are mandatory for researchers seeking to establish the effectiveness of a given testing procedure, the notions of thoroughness and efficiency in particular are irrelevant in HCI practice. In research devoted to validating a given test methodology, it is imperative that all usability problems be identified in the product or application used as a test bed. In HCI practice, however, the objective is to identify as many usability problems as possible with limited resources and within a limited time frame, and to define and implement solutions to these early in the development process. It is impossible to know whether all usability problems have been identified in a particular test or type of evaluation unless testing is repeated until it reaches an asymptote, a point at which no new problems emerge in a test. Asymptotic testing is not, and should not be, done in practice; it is as unfeasible as it is irrelevant in a work context. In the absence of a complete usability problem set, the notions of thoroughness and efficiency are meaningless and also impossible to calculate. While validity can be assessed for individual problems in practical usability tests, it cannot yield information about the effectiveness of the testing method per se because the problem set is unlikely to be complete. An example is provided to support my argument.

Relevance to industry

The point of a usability test is to identify as many usability problems as possible, with limited resources. The resulting problems hopefully include the most severe ones, but not the entire problem set. While HCI practitioners should know about thoroughness, efficiency, and validity of the test methods they elect to employ, they should not attempt to assess these from their test findings. Neither thoroughness nor efficiency can be determined from an incomplete problem set, and the notion of validity is tied to the severity of individual usability problems, not to the testing method as such.
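The asymptote argument can be illustrated with the classical problem-discovery model that is often used in this literature; the model is a common reference point for reasoning about incomplete problem sets, not something taken from the paper itself.

```latex
% Classical cumulative problem-discovery model (Nielsen & Landauer style),
% used here only to illustrate why the complete problem set is never
% observed in a practical, resource-limited test.
\[
\mathrm{Found}(n) = 1 - (1 - \lambda)^{n}
\]
% where \lambda is the probability that a single test session reveals a
% given problem and n is the number of sessions. With \lambda = 0.3,
% five sessions find about 1 - 0.7^5 \approx 83\% of the problems, and
% roughly 13 sessions are needed to pass 99\%; the curve only approaches
% 100\% asymptotically, so "all problems found" is never certain.
```

Under this model, the denominator needed to compute thoroughness or efficiency (the full set of existing problems) is never observed in a resource-limited test, which is the sense in which the author argues those measures are unusable in practice.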


18.
Kwahk J, Han SH. Applied Ergonomics, 2002, 33(5): 419-431.
Usability evaluation is now considered an essential procedure in consumer product development. Many studies have been conducted to develop various usability evaluation techniques and methods, in the hope of helping evaluators choose appropriate ones. However, planning and conducting a usability evaluation requires consideration of a number of factors surrounding the evaluation process, including product, user, activity, and environmental characteristics. In this perspective, this study suggested a new methodology of usability evaluation through a simple, structured framework. The framework was outlined by three major components: the interface features of a product as design variables; the evaluation context, consisting of user, product, activity, and environment, as context variables; and the usability measures as dependent variables. Based on this framework, this study established methods to specify the product interface features, to define the evaluation context, and to measure usability. The effectiveness of this methodology was demonstrated through case studies in which the usability of audiovisual products was evaluated using the methods developed in this study. This study is expected to help usability practitioners in the consumer electronics industry in various ways. Most directly, it supports evaluators in planning and conducting usability evaluation sessions in a systematic and structured manner. In addition, it can be applied to other categories of consumer products (such as appliances, automobiles, communication devices, etc.) with minor modifications as necessary.

19.
Domain-specific search heuristics greatly influence the search efficiency of robot action planning (RAP), but their computer-realized recognition and acquisition, i.e., learning, is difficult. This paper explores this challenge. First, a problem formulation of RAP is given. Then, by applying explanation-based learning, currently the only approach to acquiring domain-specific search heuristics, a new learning-based method is developed for RAP, named robot action planning via explanation-based learning (RAPEL). Finally, an example study demonstrates the effectiveness of RAPEL.

20.
New heuristics and strategies have enabled major advances in SAT solving in recent years. However, experimentation has shown that there is no winning solution that works in all cases, and a degradation of orders of magnitude can be observed if the wrong heuristic is chosen. The problem is that it is impossible to know, in advance, which heuristics are best for a given problem. Consequently, many ideas that turn out to be useful for a small subset of the cases but significantly increase run times on most others are discarded. We propose the notion of Adaptive Solving as a possible solution to this problem. In our framework, the SAT solver monitors the effectiveness of the search on-the-fly using a Performance Metric. The metric gives a score according to its assessment of the search progress. Based on this score, one or more heuristics are turned on or off. The goal is to use a specific heuristic or strategy when it is advantageous, and to turn it off when it is not, before it does too much damage. We suggest several possible metrics and compare their effectiveness. Our adaptive solver achieves significant speedups on a large set of examples. We also show that applying different heuristics to different parts of the search space can improve run times even beyond what can be achieved by the best heuristic on its own.
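The abstract sketches the adaptive-solving idea only at a high level; the paper's concrete performance metrics and heuristic set are not listed here. The sketch below is a hypothetical illustration of the control loop: a metric (here, conflicts per decision over a sliding window) is scored periodically and a heuristic is toggled when the score degrades.

```python
# Hypothetical sketch of an adaptive control loop around a CDCL-style solver.
# The metric (conflicts per decision) and the toggled heuristic are assumptions
# for illustration; the paper's actual metrics and strategies are not shown here.
from collections import deque

class AdaptiveController:
    def __init__(self, window=1000, threshold=0.25):
        self.recent = deque(maxlen=window)   # 1 = conflict, 0 = plain decision
        self.threshold = threshold
        self.aggressive_restarts = False     # the heuristic being switched

    def record(self, conflict: bool):
        self.recent.append(1 if conflict else 0)

    def score(self) -> float:
        """Performance metric: fraction of recent decisions ending in conflict."""
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def adapt(self):
        """Turn the heuristic on when the search is struggling, off otherwise."""
        struggling = self.score() > self.threshold
        if struggling != self.aggressive_restarts:
            self.aggressive_restarts = struggling
            print(f"aggressive restarts -> {self.aggressive_restarts}")

# Inside the solver's main loop one would call, for example:
#   controller.record(conflict_occurred)
#   if decisions % 1000 == 0:
#       controller.adapt()
```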
