Similar Literature

20 similar documents found.
1.

Context

In recent years, many usability evaluation methods (UEMs) have been employed to evaluate Web applications. However, many of these applications still do not meet most customers' usability expectations, and many companies have folded as a result of neglecting Web usability. No study has yet summarized the use of usability evaluation methods for the Web or the benefits they bring.

Objective

The objective of this paper is to summarize the current knowledge that is available as regards the usability evaluation methods (UEMs) that have been employed to evaluate Web applications over the last 14 years.

Method

A systematic mapping study was performed to assess the UEMs that have been used by researchers to evaluate Web applications and their relation to the Web development process. Systematic mapping studies are useful for categorizing and summarizing the existing information concerning a research question in an unbiased manner.

Results

The results show that around 39% of the papers reviewed reported the use of evaluation methods that had been specifically crafted for the Web. The results also show that the type of method most widely used was that of User Testing. The results identify several research gaps, such as the fact that around 90% of the studies applied evaluations during the implementation phase of the Web application development, which is the most costly phase in which to perform changes. A list of the UEMs that were found is also provided in order to guide novice usability practitioners.

Conclusions

From an initial set of 2703 papers, a total of 206 research papers were selected for the mapping study. The results obtained allowed us to reach conclusions concerning the state of the art of UEMs for evaluating Web applications. This allowed us to identify several research gaps, which subsequently provided us with a framework in which new research activities can be more appropriately positioned, and from which useful information for novice usability practitioners can be extracted.

2.
Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work on UEMs. The dogmas include using inadequate procedures and measures for assessment, focusing on win–lose outcomes, holding simplistic models of how usability evaluators work, concentrating on evaluation rather than on design and working from the assumption that usability problems are real. We discuss research approaches that may help move beyond the dogmas. In particular, we emphasise detailed studies of evaluation processes, assessments of the impact of UEMs on design carried out in real-world systems development and analyses of how UEMs may be combined.

3.
The current variety of alternative approaches to usability evaluation methods (UEMs) designed to assess and improve usability in software systems is offset by a general lack of understanding of the capabilities and limitations of each. Practitioners need to know which methods are more effective and in what ways and for what purposes. However, UEMs cannot be evaluated and compared reliably because of the lack of standard criteria for comparison. In this article, we present a practical discussion of factors, comparison criteria, and UEM performance measures useful in studies comparing UEMs. In demonstrating the importance of developing appropriate UEM evaluation criteria, we offer operational definitions and possible measures of UEM performance. We highlight specific challenges that researchers and practitioners face in comparing UEMs and provide a point of departure for further discussion and refinement of the principles and techniques used to approach UEM evaluation and comparison.

4.
Computer professionals need robust, easy-to-use usability evaluation methods (UEMs) to help them systematically improve the usability of computer artifacts. However, cognitive walkthrough (CW), heuristic evaluation (HE), and thinking-aloud study (TA), three of the most widely used UEMs, suffer from a substantial evaluator effect: multiple evaluators evaluating the same interface with the same UEM detect markedly different sets of problems. A review of 11 studies of these three UEMs reveals that the evaluator effect exists for both novice and experienced evaluators, for both cosmetic and severe problems, for both problem detection and severity assessment, and for evaluations of both simple and complex systems. The average agreement between any two evaluators who have evaluated the same system using the same UEM ranges from 5% to 65%, and none of the three UEMs is consistently better than the others. Although evaluator effects of this magnitude may not be surprising for a UEM as informal as HE, it is certainly notable that a substantial evaluator effect persists for evaluators who apply the strict procedure of CW or observe users thinking out loud. Hence, it is highly questionable to treat a TA with one evaluator as an authoritative statement about the problems an interface contains. Generally, the application of the UEMs is characterized by (a) vague goal analyses leading to variability in the task scenarios, (b) vague evaluation procedures leading to anchoring, or (c) vague problem criteria leading to anything being accepted as a usability problem, or all of these. The simplest way of coping with the evaluator effect, which cannot be completely eliminated, is to involve multiple evaluators in usability evaluations.
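The any-two-evaluator agreement figures reported above can be operationalized as the ratio of shared to combined problem sets; a minimal sketch in Python, with invented problem identifiers for illustration (the review does not prescribe this exact formula):

```python
def any_two_agreement(problems_a, problems_b):
    """Agreement between two evaluators' usability-problem sets,
    measured as shared problems over the union of both sets (one
    common operationalization of any-two-evaluator agreement)."""
    a, b = set(problems_a), set(problems_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical problem sets from two evaluators of the same interface:
eval_1 = {"P1", "P2", "P3", "P5"}
eval_2 = {"P2", "P3", "P4", "P6", "P7"}
print(round(any_two_agreement(eval_1, eval_2), 2))  # 2 shared of 7 distinct: 0.29
```

Agreement this low for evaluators using the same method on the same system is exactly the evaluator effect the review documents.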

5.
The importance of user-centred evaluation is stressed by HCI academics and practitioners alike. However, there have been few recent evaluation studies of usability evaluation methods (UEMs), especially those with the aim of improving methods rather than assessing their efficacy (i.e. formative rather than summative evaluations). In this article, we present formative evaluations of two new methods for assessing the functionality and usability of a particular type of interactive system: electronic information resources. These serve as an example of an evaluation approach for assessing the success of new HCI methods. We taught the methods to a group of electronic resource developers and collected a mixture of focus group, method usage and summary questionnaire data, all focusing on how useful, usable and learnable the developers perceived the methods to be and how likely they were to use them in the future. Findings related to both methods were generally positive, and useful suggestions for improvement were made. Our evaluation sessions also highlighted a number of trade-offs for the development of UEMs and general lessons learned, which we discuss in order to inform the future development and evaluation of HCI methods.

6.
We focus on the ability of two analytical usability evaluation methods (UEMs), namely CASSM (Concept-based Analysis for Surface and Structural Misfits) and Cognitive Walkthrough, to identify usability issues underlying the use made of two London Underground ticket vending machines. By setting both sets of issues against the observed interactions with the machines, we assess the similarities and differences between the issues depicted by the two methods. In so doing we de-emphasise the mainly quantitative approach which is typical of the comparative UEM literature. However, by accounting for the likely consequences of the issues in behavioural terms, we reduced the proportion of issues which were anticipated but not observed (the false positives), compared with that achieved by other UEM studies. We assess these results in terms of the limitations of problem count as a measure of UEM effectiveness. We also discuss the likely trade-offs between field studies and laboratory testing.

7.
Ergonomics, 2012, 55(7): 609-625
In-vehicle information systems (IVIS) can be controlled by the user via direct or indirect input devices. In order to develop the next generation of usable IVIS, designers need to be able to evaluate and understand the usability issues associated with these two input types. The aim of this study was to investigate the effectiveness of a set of empirical usability evaluation methods for identifying important usability issues and distinguishing between the IVIS input devices. A number of usability issues were identified and their causal factors have been explored. These were related to the input type, the structure of the menu/tasks and hardware issues. In particular, the translation between inputs and on-screen actions and a lack of visual feedback for menu navigation resulted in lower levels of usability for the indirect device. This information will be useful in informing the design of new IVIS, with improved usability.

Statement of Relevance: This paper examines the use of empirical methods for distinguishing between direct and indirect IVIS input devices and identifying usability issues. Results have shown that the characteristics of indirect input devices produce more serious usability issues, compared with direct devices and can have a negative effect on the driver–vehicle interaction.

8.
Abstract

As a result of the importance of the usability approach in system development and the EC's 'Directive concerning the minimum safety and health requirements for VDT workers' (EWG 1990), there is an accepted need for practical evaluation methods for user interfaces. The usability approach and the EC Directive are not restricted to user interface design, as they include the design of appropriate hardware and software, as well as organization, job, and task design. Therefore system designers are faced with many, often conflicting, requirements and need to address the question, 'How can usability requirements comprehensively be considered and evaluated in system development?' Customers buying hardware and software and introducing them into their organization ask, 'How can I select easy-to-use hardware and software?' Both designers and customers need an evaluation procedure that covers all the organizational, user, hardware, and software requirements. The evaluation method EVADIS.II, which we present in this paper, overcomes characteristic deficiencies of previous evaluation methods. In particular, it takes the tasks, the user, and the organizational context into consideration during the evaluation process, and provides computer support for the use of the evaluation procedure.

9.
Verbal protocols are the primary tool for understanding users' task-solving behaviors during usability testing. A qualitative study that examined the utility of combining a concurrent and a retrospective think-aloud within the same usability test is described. The results indicate that although there was significant overlap between the types of utterances produced during each think-aloud, the retrospective phase produced more verbalizations that were relevant to usability analysis, for example, helpful self-assessments of performance, yielding insights into the impact of encountered difficulties. However, a small number of less desirable utterance types emerged: hypothesizing, rationalizing, and forgetting. When used together, both methods contributed to an understanding of usability issues; the concurrent phase yielded more usability issues overall, and the retrospective data improved the understanding of these by (a) reinforcement: users highlighted the impact of an issue on their experience, (b) elaboration: users provided causal explanations of encountered difficulties, and (c) context: users provided information about the product's context of use.

10.
Abstract

This paper describes two of the usability laboratories at Philips, discusses practical issues arising from our experience using the facilities, gives an example of a typical usability evaluation, and briefly outlines our vision for the future of the laboratories. Usability tests at Philips can involve any product from a portfolio ranging from Compact Disc Interactive (CD-I) to electron microscopes. Performing usability tests for consumer electronic products poses a number of specific problems: our user group is broad and diverse, the context in which our products are used is highly variable, and it is difficult to determine the importance of usability relative to other design goals. In the further development of our facilities, the efficient planning and the execution of usability test is of particular concern since we are driven by demanding time schedules. In the future, we expect a shift in focus towards testing more products and product concepts in their actual context of use.

11.

An evaluation method is proposed based on walkthrough analysis coupled with a taxonomic analysis of observed problems and causes of usability error. The model mismatch method identifies usability design flaws and missing requirements from user errors. The method is tested with a comparative evaluation of two information retrieval products. Different profiles of usability and requirements problems were found for the two products, even though their overall performance was similar.

12.
Abstract

Okay, so you've purchased a graphical user interface (GUI) builder tool to help you quickly build a sophisticated user interface, and your developers promise to follow a particular style guide (e.g., OSF/Motif, Apple/Macintosh) when creating the GUI. This is definitely a step in the right direction, but it is no guarantee that the application's user interface will be usable; that is, that the user interface helps, rather than hinders, the end-users in doing their jobs. Numerous techniques for testing the usability and user satisfaction of an application's GUI are available, such as design walk-throughs, field testing with beta releases, demonstrations of prototypes to future end-users, and user questionnaires. One of the most effective techniques is usability testing with defined tasks and metrics, and yet it is not commonly used in project development life cycles at the National Aeronautics and Space Administration (NASA). This paper discusses a low-budget, but effective, approach we used at NASA's Goddard Space Flight Center (GSFC) to perform structured usability testing. It did not require any additional staff or a usability laboratory, but did successfully identify problems with the application's user interface. The purpose of the usability testing was two-fold: (1) to test the process used in the usability test; and (2) to apply the results of the test to improving the subject software's user interface. This paper will discuss the results from the test and the lessons learned. It will conclude with a discussion of future plans to conduct cost-benefit analysis and integrate usability testing as a required step in a project's development life cycle.

13.
Designing easy-to-use mobile applications is a difficult task. In order to optimize the development of a usable mobile application, it is necessary to consider the mobile usage context in the design and the evaluation of the user-system interaction of a mobile application. In our research we designed a method that aligns the inspection method "Software ArchitecTure analysis of Usability Requirements realizatioN" (SATURN) with a mobile usability evaluation in the form of a user test. We propose to use mobile context factors, and thus requirements, as a common basis for both inspection and user test. After conducting both the analysis and the user test, the results, described as usability problems, are mapped and discussed. The mobile context factors identified define and describe the usage context of a mobile application. We exemplify and apply our approach in a case study. This allows us to show how our method can be used to identify more usability problems than either method identifies separately. Additionally, we could confirm the validity and identify the severity of the usability problems found by both methods. Our work presents how a combination of both methods allows usability issues to be addressed in a more holistic way. We argue that the increased quantity and quality of results can lead to a reduction in the number of iterations required in the early stages of an iterative software development process.

14.
任英丽, 吴诗瑾, 《图学学报》 (Journal of Graphics), 2021, 42(2): 325-331
To improve the usability of multi-joint active-passive training devices, a user-centred usability evaluation index system for such devices was constructed on the basis of user experience theory, and a usability evaluation model was built using the fuzzy analytic hierarchy process (FAHP) and quantified membership degrees. Taking a multi-joint active-passive training device produced by one company as an example, experts first determined the evaluation indices and computed the weights of the individual index factors, after which a scoring panel operated the...

15.
For different levels of user performance, different types of information are processed and users will make different types of errors. Based on the error's immediate cause and the information being processed, usability problems can be classified into three categories. They are usability problems associated with skill-based, rule-based and knowledge-based levels of performance. In this paper, a user interface for a Web-based software program was evaluated with two usability evaluation methods, user testing and heuristic evaluation. The experiment discovered that the heuristic evaluation with human factor experts is more effective than user testing in identifying usability problems associated with skill-based and rule-based levels of performance. User testing is more effective than heuristic evaluation in finding usability problems associated with the knowledge-based level of performance. The practical application of this research is also discussed in the paper.

16.
The number of usability problems discovered in a user trial or identified in a heuristic evaluation can never be claimed to be exhaustive. This raises the question of how many usability problems remained undetected. In ergonomics/human factors research this subject matter is often addressed by asking how many participants are sufficient to discover a specific proportion of the usability problems. Current approaches to answer this question suffer from various biasing mechanisms, which undermine the credibility of the popular ‘rule of thumb’ that five participants are sufficient for the discovery of 80% of ‘all’ usability problems. This 5-user rule appears to be speculative in its application as a stop rule. In this paper, I compare actual estimates of the number of usability problems. Underestimation surfaces as a permanent threat. The so-called Turing estimate (CT) appears to be the most satisfactory. However, also CT estimates may suffer from underestimation. Therefore max(CT,CF) with the CF estimate based on partitioned frequencies is proposed as the most adequate estimate of the number of usability problems in the studies presented.
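The binomial discovery model behind the contested 5-user rule, together with a rough Good-Turing-style correction of the problem count from singleton detections, can be sketched as follows. This is a simplified illustration with invented detection counts, not the paper's exact CT or CF estimators:

```python
def prop_discovered(p, n):
    """Expected proportion of problems found by n participants under the
    classic binomial model, in which each participant detects any given
    problem with probability p. This is the model behind the 5-user rule."""
    return 1 - (1 - p) ** n

# With p = 0.31 (a commonly cited average detection rate), five users
# are expected to find about 84% of the problems:
print(round(prop_discovered(0.31, 5), 2))  # 0.84

def good_turing_total(found_counts):
    """Rough Good-Turing-style estimate of the total number of problems,
    given how many participants detected each observed problem. The
    probability mass of unseen problems is estimated from the proportion
    of detection events belonging to problems seen exactly once."""
    distinct = len(found_counts)            # distinct problems observed
    detections = sum(found_counts)          # total detection events
    singletons = sum(1 for c in found_counts if c == 1)
    unseen_mass = singletons / detections   # estimated mass of unseen problems
    return distinct / (1 - unseen_mass)

# Hypothetical detection counts for 8 observed problems:
print(round(good_turing_total([5, 3, 1, 1, 2, 1, 4, 1]), 1))  # 10.3
```

The deflation step is the key point: with four of eight problems detected only once, the estimate suggests roughly two further problems went undetected, which is the kind of underestimation threat the paper analyses.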

17.
To motivate visitors to engage with websites, e-tailers widely employ monetary rewards (e.g., vouchers, discounts) in their website designs. With advances in user interface technologies, many e-tailers have started to offer gamified monetary reward designs (MRDs), which require visitors to earn the monetary reward by playing a game, rather than simply claiming the reward. However, little is known about whether and why gamified MRDs engage visitors compared to their non-gamified counterpart. Even less is known about the effectiveness of gamified MRDs when providing certain or chance-based rewards, in that visitors do or do not know what reward they will gain for successfully performing in the game. Drawing on cognitive evaluation theory, we investigate gamified MRDs with certain or chance-based rewards and contrast them to non-gamified MRDs with certain rewards in user registration systems. Our results from a multi-method approach encompassing the complementary features of a randomised field experiment (N = 651) and a randomised online experiment (N = 330) demonstrate differential effects of the three investigated MRDs on user registration. Visitors encountering either type of gamified MRD are more likely to register than those encountering a non-gamified MRD. Moreover, gamified MRDs with chance-based rewards have the highest likelihood of user registrations. We also show that MRDs have distinct indirect effects on user registration via anticipated experiences of competence and sensation. Overall, the paper offers theoretical insights and practical guidance on how and why gamified MRDs are effective for e-tailers.

18.
Abstract

This paper reports a method for measuring usability in terms of task performance: the achievement of frequent and critical task goals by particular users in a context simulating the work environment. The terms usability and quality in use are defined in international standards as the effectiveness, efficiency and satisfaction with which goals are achieved in a specific context of use. The performance measurement method gives measures which, in combination with measures of satisfaction, operationalize these definitions. User performance is specified and assessed by measures including task effectiveness (the quantity and quality of task performance) and user efficiency (effectiveness divided by task time). Measures are obtained with users performing tasks in a context of evaluation which matches the intended context of use. This can also reveal usability problems which may not become evident if the evaluator interacts with the user. The method is supported by tools which make it practical in commercial timescales. The method has been widely applied in industry, and can be adapted for use early in design, and to evaluate non-computer products and the performance of small work groups.
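The measures the abstract defines can be written down directly. In this sketch the product form for task effectiveness (quantity times quality) and the example numbers are illustrative assumptions, not the method's published scoring rules:

```python
def task_effectiveness(quantity, quality):
    """Task effectiveness from the quantity (proportion of the task goal
    achieved) and quality of task performance, both scored in [0, 1].
    The product is one common operationalization of combining them."""
    return quantity * quality

def user_efficiency(effectiveness, task_time):
    """User efficiency = effectiveness divided by task time,
    as defined in the abstract (here expressed per minute)."""
    return effectiveness / task_time

# Hypothetical session: 90% of the task completed at 80% quality in 6 minutes.
eff = task_effectiveness(0.9, 0.8)
print(round(eff, 2), round(user_efficiency(eff, 6.0), 2))  # 0.72 0.12
```

Dividing effectiveness by time is what separates the two measures: a user who achieves the same effectiveness in half the time scores twice the efficiency.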

19.
Abstract

This article provides a table with summary statistics for the thirteen usability laboratories described in the papers in this special issue. It also gives an introduction to the main uses of usability laboratories in usability engineering and surveys some of the issues related to practical use of user testing and CAUSE tools for computer-aided usability engineering.

20.

In recent years, smartphone devices have become increasingly popular across a diverse range of users. However, user diversity creates challenges in smartphone application (app) development. The diversity of users is often ignored by designers and developers owing to the absence of requirements, and as a result many smartphone users face usability issues. Despite this, no dedicated platform exists to guide smartphone app designers and developers with regard to human universality. The aim of this research is to explore the requirements of diverse users of smartphone apps and to provide usability guidelines. The objectives of this research were achieved by following two scientific approaches. Human diversity requirements were located by conducting usability tests that captured the requirements in the form of usability issues. A systematic literature review (SLR) was then performed to resolve the discovered usability issues. Both approaches resulted in a list of usability issues and guidelines: the usability tests returned 27 problems, while the SLR produced a comprehensive set of universal usability guidelines grouped into eleven categories. The study concluded with some major outcomes. The results show evidence of critical usability problems that must be addressed during the design and development of smartphone apps. Moreover, the study also revealed that people with disabilities were three times as severely affected by usability problems in such apps as people of different ages, and that their needs must be considered a top priority in the development of smartphone apps.

