Similar Documents
20 similar documents found (search time: 530 ms)
1.
We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess the methods' predictive power by comparing the predictions to the results of user tests. We assess the methods' persuasive power by seeing how many problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

2.
Abstract

Okay, so you've purchased a graphical user interface (GUI) builder tool to help you quickly build a sophisticated user interface, and your developers promise to follow a particular style guide (e.g., OSF/Motif, Apple/Macintosh) when creating the GUI. This is definitely a step in the right direction, but it is no guarantee that the application's user interface will be usable; that is, that the user interface helps, rather than hinders, the end-users in doing their jobs. Numerous techniques for testing the usability and user satisfaction of an application's GUI are available, such as design walk-throughs, field testing with beta releases, demonstrations of prototypes to future end-users, and user questionnaires. One of the most effective techniques is usability testing with defined tasks and metrics, and yet, it is not commonly used in project development life cycles at the National Aeronautics and Space Administration (NASA). This paper discusses a low-budget, but effective, approach we used at NASA's Goddard Space Flight Center (GSFC) to perform structured usability testing. It did not require any additional staff or a usability laboratory, but did successfully identify problems with the application's user interface. The purpose of the usability testing was two-fold: (1) to test the process used in the usability test; and (2) to apply the results of the test to improving the subject software's user interface. This paper will discuss the results from the test and the lessons learned. It will conclude with a discussion of future plans to conduct cost-benefit analysis and integrate usability testing as a required step in a project's development life cycle.

3.

An evaluation method is proposed based on walkthrough analysis coupled with a taxonomic analysis of observed problems and causes of usability error. The model mismatch method identifies usability design flaws and missing requirements from user errors. The method is tested with a comparative evaluation of two information retrieval products. Different profiles of usability and requirements problems were found for the two products, even though their overall performance was similar.

4.
Abstract

This article defines a quantitative goal that is cheap to measure for the usability of a business application system for casual users. The article also describes a cost-effective method for attaining the goal. The goal is to eliminate all user interface disasters (UIDs) in a given system. UIDs are usability problems that seriously annoy users, or prevent them from accomplishing their work without help from a human being. The method consists of a series of simple user tests without audio or video recording, and with little analysis after each user test. The article concludes by describing Baltica's results from applying the method to a medium-size business application for casual users.

5.

A walkthrough method for evaluating virtual reality (VR) user interfaces is described and illustrated with a usability assessment of a virtual business park application. The method is based on a theory of interaction that extends Norman's model of action. A walkthrough analysis method uses three models derived from the theory. The first model describes goal-oriented task action, the second exploration and navigation in virtual worlds, while the third covers interaction in response to system initiative. Each stage of the model is associated with generic design properties that specify the necessary support from the system for successful interaction. The evaluation method consists of a checklist of questions using the properties and following the model cycle. Use of the method uncovered several usability problems. Approaches to evaluation of VR applications and future work are discussed.

6.
Abstract

Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.

7.
Abstract

As a result of the importance of the usability approach in system development and the EC's 'Directive concerning the minimum safety and health requirements for VDT workers' (EWG 1990), there is an accepted need for practical evaluation methods for user interfaces. The usability approach and the EC Directive are not restricted to user interface design, as they include the design of appropriate hardware and software, as well as organization, job, and task design. Therefore system designers are faced with many, often conflicting, requirements and need to address the question, 'How can usability requirements comprehensively be considered and evaluated in system development?' Customers buying hardware and software and introducing them into their organization ask, 'How can I select easy-to-use hardware and software?' Both designers and customers need an evaluation procedure that covers all the organizational, user, hardware, and software requirements. The evaluation method we present in this paper, EVADIS.II, overcomes characteristic deficiencies of previous evaluation methods. In particular, it takes the tasks, the user, and the organizational context into consideration during the evaluation process, and provides computer support for the use of the evaluation procedure.

8.

When designing a usability evaluation, choices must be made regarding methods and techniques for data collection and analysis. Mobile guides raise new concerns and challenges to established usability evaluation approaches. Not only are they typically closely related to objects and activities in the user's immediate surroundings, they are often used while the user is ambulating. This paper presents results from an extensive, multi-method evaluation of a mobile guide designed to support the use of public transport in Melbourne, Australia. In evaluating the guide, we applied four different techniques: field evaluation, laboratory evaluation, heuristic walkthrough, and rapid reflection. This paper describes these four approaches and their respective outcomes, and discusses their relative strengths and weaknesses for evaluating the usability of mobile guides.

9.

Much attention has been paid to the question of how many subjects are needed in usability research. Virzi (1992) modelled the accumulation of usability problems with increasing numbers of subjects and claimed that five subjects are sufficient to find most problems. The current paper argues that this answer is based on an important assumption, namely that all types of users have the same probability of encountering all usability problems. If this homogeneity assumption is violated, then more subjects are needed. A modified version of Virzi's model demonstrates that the number of subjects required increases with the number of heterogeneous groups. The model also shows that the more distinctive the groups, the more subjects will be required. This paper will argue that the simple answer 'five' cannot be applied in all circumstances. It most readily applies when the probability that a user will encounter a problem is both high and similar for all users. It also only applies to simple usability tests that seek to detect the presence, but not the statistical prevalence, of usability problems.
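Virzi's claim rests on a simple cumulative binomial model: if every user encounters any given problem with the same probability p, the expected share of problems found by n users is 1 − (1 − p)^n. A minimal sketch of that model and of how the required sample grows, assuming a uniform p (the value 0.31 below is the average discovery probability often cited from Virzi's data, used here purely for illustration):

```python
def discovery_rate(p, n):
    """Expected share of problems found by n users under Virzi's model,
    where each user hits any given problem with probability p."""
    return 1 - (1 - p) ** n

def users_needed(p, target=0.85):
    """Smallest n whose expected discovery rate reaches `target`."""
    n = 1
    while discovery_rate(p, n) < target:
        n += 1
    return n

# With a uniform p of 0.31, five users find roughly 84% of problems.
# If users split into k heterogeneous groups and each problem is only
# visible to one group, the effective n per problem shrinks to n/k,
# so proportionally more users are needed -- the paper's core point.
```

For example, if problems are split evenly between two groups that never encounter each other's problems, ten users behave like five for every individual problem, which is exactly the violation of the homogeneity assumption the abstract describes.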

10.
With the rapid spread of smartphones, users have access to many types of applications similar to those on desktop computer systems. Smartphone applications using augmented reality (AR) technology make use of users' location information. Because AR applications will require new evaluation methods, principles for improving usability and user convenience need to be developed. The purpose of the current study is to develop usability principles for the development and evaluation of smartphone applications using AR technology. We develop usability principles for smartphone AR applications by analyzing existing research on heuristic evaluation methods, design principles for AR systems, guidelines for handheld mobile device interfaces, and usability principles for the tangible user interface. We conducted a heuristic evaluation of three popular smartphone AR applications to identify usability problems, and suggested new design guidelines to solve the identified problems. We then developed an improved AR application prototype for an Android-based smartphone, which subsequently underwent usability testing to validate the effects of the usability principles.

11.
The field of user-interface design lacks expertise in designing nonvisual user interfaces. This is surprising, as auditory interfaces have already proved helpful in various domains, such as railway information services and reading support for blind persons.

We present a case study concerning the design of a telephone-based interface (TBI). It was realized within the development process of an interaction concept for a modular home automation system. The design was based on requirements gathered in user focus groups and on general guidelines for the design of TBIs. The TBI's evaluation revealed some minor (i.e., easily solved) usability problems. Questionnaires showed a positive ergonomic quality as well as a positive overall appeal. Interestingly, the evaluation indicates a potential to improve hedonic quality (i.e., non-task-related quality aspects). This potential may be realized by adding nonspeech sounds, thereby enriching the user experience.

12.
The usability movement has historically always sought to empower end-users of computers so that they understand what is happening and can control the outcome. In this article, we develop and evaluate a “Textual Feedback” tool for usability and user experience (UX) evaluation that can be used to empower well-educated but low-status users in UX evaluations in countries and contexts with high power distances. The proposed tool contributes to the Human–Computer Interaction (HCI) community's pool of localized UX evaluation tools. We evaluate the tool with 40 users from two socio-economic groups in real-life UX evaluation settings in Malaysia. The results indicate that the Textual Feedback tool may help participants voice their thoughts in UX evaluations in high power distance contexts. In particular, the Textual Feedback tool helps high-status females and low-status males express more UX problems than they can with traditional concurrent think aloud (CTA) alone. We found that classic concurrent think aloud UX evaluation works fine in high power distance contexts, but only with the addition of Textual Feedback to mitigate the effects of socio-economic status in certain user groups. We suggest that future research on UX evaluation look more into how to empower certain user groups, such as low-status female users, in UX evaluations done in high power distance contexts.

13.
Considering the importance associated with e-commerce website accessibility and usability, a study of one of the most relevant Portuguese e-commerce websites has been performed using both automatic and manual assessment procedures. In an initial stage, we evaluated the chosen website with a Web accessibility and usability automatic tool called SortSite; after that, we performed a manual evaluation to verify each previously detected error and present possible solutions to overcome those faults. In a third phase, three usability specialists performed a heuristic evaluation of the chosen website. Finally, user tests with blind people were carried out in order to fully assess compliance with accessibility and usability guidelines and standards. The results showed that the platform had a good score in the automatic evaluation; however, when the heuristic and manual evaluations were performed, some accessibility and usability problems were discovered. Moreover, the user test results showed poor marks regarding efficiency, effectiveness, and satisfaction among the group of participants. In conclusion, we highlight user interaction problems and propose seven recommendations focused on enhancing the accessibility and usability not only of the evaluated e-commerce website, but also of other similar ones.

14.
Context: Usability is an important software quality attribute for APIs. Unfortunately, measuring it is not an easy task, since many things like experienced evaluators, suitable test users, and a functional product are needed. This makes existing usability measurement methods difficult to use, especially for non-professionals. Objective: To make API usability measurement easier, an automated and objective measurement method is needed. This article proposes such a method. Since it would be impossible to find and integrate all possible factors that influence API usability in one step, the main goal is to prove the feasibility of the introduced approach, and to define an extensible framework so that additional factors can easily be defined and added later. Method: A literature review is conducted to find potential factors influencing API usability. From these factors, a selected few are investigated more closely with usability studies. The statistically evaluated results from these studies are used to define specific elements of the introduced framework. Further, the influence of the user as a critical factor for the framework's feasibility is evaluated. Results: The API Concepts Framework is defined, with an extensible structure based on concepts that represent the user's actions, measurable properties that define what influences the usability of these concepts, and learning effects that represent the influence of the user's experience. A comparison of values calculated by the framework with user studies shows promising results. Conclusion: The introduced approach is feasible and provides useful results for evaluating API usability. The extensible framework easily allows new concepts and measurable properties to be added in the future.

15.
Abstract

Usability evaluation is a key component of a user-centred design process. Access to a usability laboratory can greatly facilitate the process of empirically measuring user performance, but the mere presence of a usability laboratory does not assure usable products. Rather, the laboratory must be used within an evaluation process. The process described in this article has five phases: designing the evaluation, preparing to conduct the evaluation, conducting the evaluation, analysing the data, and reporting the results. Lessons learned by the authors while they practised this evaluation process with a variety of products are summarized for possible use by other usability organizations.

16.
The overall discovery rates, which are the ratios of the number of unique usability problems detected by all experiment participants to the number of usability problems existing in the evaluated systems, were investigated to find significant factors of usability evaluation through a meta-analytic approach with the n-corrected effect sizes newly defined in this study. Since many studies of usability evaluation have been conducted under specific contexts, showing some mixed findings, usability practitioners need holistic and more generalized conclusions. Due to the limited applicability of traditional meta-analysis to usability evaluation studies, a new meta-analytic approach was established and applied to 38 experiments that reported overall discovery rates of usability problems as a criterion measure. Through the meta-analytic approach with the n-corrected effect sizes, we successfully combined the 38 experiments and found evaluator expertise, report type, and the interaction between usability evaluation method and time constraint to be significant factors. We suggest that, in order to increase overall discovery rates of usability problems, (a) free-style written reports are better than structured written reports; (b) when heuristic evaluation or cognitive walkthrough is used, usability evaluation experiments should be conducted without time constraint, but when think-aloud is used, time constraint is not an important experimental condition; (c) usability practitioners do not need to be concerned about unit of evaluation, fidelity of evaluated systems, and task type; and (d) HCI experts are better than novice users or evaluators. Our conclusions can guide usability practitioners when determining evaluation contexts, and the meta-analytic approach of this study provides an alternative way to combine the empirical results of usability evaluation besides traditional meta-analysis.

17.
The novice–expert ratio method (NEM) pinpoints user interface design problems by identifying the steps in a task that have a high ratio of novice to expert completion time. This study tested the construct validity of NEM's ratio measure against common alternatives. Data were collected from 337 participants who separately performed 10 word-completion tasks on a cellular phone interface. The logarithm, ratio, Cohen's d, and Hedges's g measures had similar construct validity, but Hedges's g provided the most accurate measure of effect size. All these measures correlated more strongly with self-reported interface usability and interface knowledge when applied to the number of actions required to complete a task than when applied to task completion time. A weighted average of both measures had the highest correlation. The relatively high correlation between self-reported interface usability and a weighted Hedges's g measure, as compared to the correlations found in the literature, indicates the usefulness of the weighted Hedges's g measure in identifying usability problems.
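For context, Hedges's g is Cohen's d with a small-sample bias correction. A sketch of how these per-step effect-size measures could be computed from novice and expert samples (illustrative only: the study's weighting scheme is not reproduced here, and the correction factor 1 − 3/(4N − 9) is the common approximation):

```python
import math

def cohens_d(novice, expert):
    """Pooled-standard-deviation difference between novice and expert
    measures (e.g., completion times or action counts) for one step."""
    n1, n2 = len(novice), len(expert)
    m1, m2 = sum(novice) / n1, sum(expert) / n2
    v1 = sum((x - m1) ** 2 for x in novice) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in expert) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def hedges_g(novice, expert):
    """Cohen's d with the usual small-sample bias correction."""
    n = len(novice) + len(expert)
    return cohens_d(novice, expert) * (1 - 3 / (4 * n - 9))

def novice_expert_ratio(novice, expert):
    """NEM's original measure: mean novice value over mean expert value."""
    return (sum(novice) / len(novice)) / (sum(expert) / len(expert))
```

Steps where the ratio (or g) is unusually large relative to the other steps are the candidates for design problems.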

18.
ABSTRACT

The evaluator effect refers to the observation that usability evaluators working under similar conditions identify substantially different sets of usability problems. Yet little is known about the factors involved in the evaluator effect. We present a study of 50 novice evaluators' usability tests and subsequent comparisons, in teams and individually, of the resulting usability problems. The same problems were analyzed independently by 10 human–computer interaction experts. The study shows an agreement between evaluators of about 40%, indicating a substantial evaluator effect. Team matching of problems following the individual matching appears to improve the agreement, and evaluators express greater satisfaction with the teams' matchings. The matchings of individuals, teams, and independent experts show evaluator effects of similar sizes; yet individuals, teams, and independent experts fundamentally disagree about which problems are similar. Previous claims in the literature about the evaluator effect are challenged by the large variability in the matching of usability problems; we identify matching as a key determinant of the evaluator effect. An alternative view of usability problems and evaluator agreement is proposed in which matching is seen as an activity that helps to make sense of usability problems and where the existence of a correct matching is not assumed.
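Agreement figures like the ~40% above are conventionally computed as average any-two agreement: the overlap between each pair of evaluators' problem sets divided by their union, averaged over all pairs. A minimal sketch (note that deciding which reported problems count as "the same" is itself the matching activity the abstract identifies as a key source of variation):

```python
from itertools import combinations

def any_two_agreement(problem_sets):
    """Average pairwise |A intersect B| / |A union B| over all pairs of
    evaluators' usability-problem sets (the any-two agreement measure)."""
    pairs = list(combinations(problem_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)
```

Each set here contains problem identifiers assigned after matching; a different matching yields different sets and hence a different agreement score.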

19.
This paper describes MTi, a biometric method for user identification on multitouch displays. The method is based on features obtained only from the coordinates of the 5 touchpoints of one of the user's hands. This makes MTi applicable to all multitouch displays large enough to accommodate a human hand and detect 5 or more touchpoints, without requiring additional hardware and regardless of the display's underlying sensing technology. MTi only requests that the user places his hand on the display with the fingers comfortably stretched apart. A dataset of 34 users was created, on which our method reported 94.69% identification accuracy. The method's scalability was tested on a subset of the Bosphorus hand database (100 users, 94.33% identification accuracy) and a usability study was performed.
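The abstract does not spell out MTi's exact feature set; one plausible, purely illustrative reading is that scale-normalized pairwise distances between the five touchpoints could serve as a hand-geometry feature vector:

```python
import math
from itertools import combinations

def touchpoint_features(points):
    """Pairwise distances between the five fingertip touchpoints,
    normalized by the largest distance so the vector is scale-invariant.
    Illustrative only -- MTi's actual features are not given here."""
    dists = [math.dist(a, b) for a, b in combinations(points, 2)]
    largest = max(dists)
    return [d / largest for d in dists]
```

Five touchpoints yield ten pairwise distances, a small fixed-length vector that a standard classifier could match against enrolled hands.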

20.

A tool was developed for structured and detailed analysis of video data from user tests of interactive systems. It makes use of a table format for representing an interaction at multiple levels of abstraction. Interactions are segmented based on threshold times for pauses between actions. Usability problems are found using a list of observable indications for the occurrence of problems. The tool was evaluated by having two analysts apply it to three data sets from user tests on two different products. The segmentation technique proved to yield meaningful segments that helped in understanding the interaction. The interaction table was explicit enough to discuss in detail what had caused the differences in the analysts' lists of usability problems. The results suggested that the majority of differences were caused by unavoidable differences in interpretations of subjects' behaviour and that only minor improvements should be expected by refining the tool.
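The pause-based segmentation described above can be sketched as follows (the threshold is whatever the analyst chooses; the 5-second value in the test below is only an example):

```python
def segment_by_pauses(timestamps, pause_threshold):
    """Split an ordered list of action timestamps (in seconds) into
    segments wherever the gap between consecutive actions exceeds
    the pause threshold."""
    segments = [[timestamps[0]]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > pause_threshold:
            segments.append([cur])   # a long pause starts a new segment
        else:
            segments[-1].append(cur)
    return segments
```

Each resulting segment then becomes one row group in the interaction table, to be inspected against the list of problem indications.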


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号