20 similar articles found (search time: 15 ms)
1.
《Behaviour & Information Technology》2012,31(4-5):188-202
We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess the methods' predictive power by comparing their predictions to the results of user tests. We assess the methods' persuasive power by seeing how many predicted problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.
2.
Ashok Sivaji Søren Feodor Nielsen Torkil Clemmensen 《International journal of human-computer interaction》2017,33(5):357-370
The usability movement has historically sought to empower end-users of computers so that they understand what is happening and can control the outcome. In this article, we develop and evaluate a "Textual Feedback" tool for usability and user experience (UX) evaluation that can be used to empower well-educated but low-status users in UX evaluations in countries and contexts with high power distance. The proposed tool contributes to the Human–Computer Interaction (HCI) community's pool of localized UX evaluation tools. We evaluate the tool with 40 users from two socio-economic groups in real-life UX evaluation settings in Malaysia. The results indicate that the Textual Feedback tool may help participants express their thoughts in UX evaluations in high power distance contexts. In particular, the Textual Feedback tool helps high-status females and low-status males express more UX problems than they can with traditional concurrent think aloud (CTA) alone. We found that classic concurrent think aloud UX evaluation works well in high power distance contexts, but only with the addition of Textual Feedback to mitigate the effects of socio-economic status in certain user groups. We suggest that future research on UX evaluation look more into how to empower certain user groups, such as low-status female users, in UX evaluations done in high power distance contexts.
3.
With the rapid spread of smartphones, users have access to many types of applications similar to those on desktop computer systems. Smartphone applications using augmented reality (AR) technology make use of users' location information. Because AR applications call for new evaluation methods, their usability and user convenience need dedicated attention. The purpose of the current study is to develop usability principles for the development and evaluation of smartphone applications using AR technology. We develop usability principles for smartphone AR applications by analyzing existing research on heuristic evaluation methods, design principles for AR systems, guidelines for handheld mobile device interfaces, and usability principles for tangible user interfaces. We conducted a heuristic evaluation of three popular smartphone AR applications to identify usability problems, and suggested new design guidelines to solve the identified problems. We then developed an improved prototype of an Android-based AR application, which we later subjected to usability testing to validate the effects of the usability principles.
4.
Ramiro Gonçalves Tânia Rocha José Martins Frederico Branco Manuel Au-Yong-Oliveira 《Universal Access in the Information Society》2018,17(3):567-583
Considering the importance of e-commerce website accessibility and usability, a study of one of the most relevant Portuguese e-commerce websites has been performed using both automatic and manual assessment procedures. In an initial stage, we evaluated the chosen website with SortSite, an automatic Web accessibility and usability tool; after that, we performed a manual evaluation to verify each previously detected error and present possible solutions to overcome those faults. In a third phase, three usability specialists performed a heuristic evaluation of the chosen website. Finally, user tests with blind people were carried out in order to fully assess compliance with accessibility and usability guidelines and standards. The results showed that the platform scored well in the automatic evaluation; however, the heuristic and manual evaluations uncovered several accessibility and usability problems. Moreover, the user test results showed poor marks for efficiency, effectiveness, and satisfaction among the group of participants. In conclusion, we highlight user interaction problems and propose seven recommendations focused on enhancing the accessibility and usability not only of the evaluated e-commerce website, but also of other similar ones.
5.
Bettina Biel Volker Gruhn 《Journal of Systems and Software》2010,83(11):2031-2044
Designing easy-to-use mobile applications is a difficult task. To optimize the development of a usable mobile application, it is necessary to consider the mobile usage context in the design and evaluation of the user-system interaction. In our research we designed a method that aligns the inspection method "Software ArchitecTure analysis of Usability Requirements realizatioN" (SATURN) with a mobile usability evaluation in the form of a user test. We propose using mobile context factors, and thus requirements, as a common basis for both inspection and user test. After conducting both the analysis and the user test, the results, described as usability problems, are mapped and discussed. The mobile context factors identified define and describe the usage context of a mobile application. We exemplify and apply our approach in a case study, which allows us to show how our method identifies more usability problems than either method alone. Additionally, we could confirm the validity, and identify the severity, of usability problems found by both methods. Our work shows how a combination of both methods addresses usability issues in a more holistic way. We argue that the increased quantity and quality of results can reduce the number of iterations required in early stages of an iterative software development process.
6.
At different levels of user performance, different types of information are processed and users make different types of errors. Based on an error's immediate cause and the information being processed, usability problems can be classified into three categories: problems associated with skill-based, rule-based, and knowledge-based levels of performance. In this paper, the user interface of a Web-based software program was evaluated with two usability evaluation methods, user testing and heuristic evaluation. The experiment found that heuristic evaluation with human factors experts is more effective than user testing at identifying usability problems associated with the skill-based and rule-based levels of performance, while user testing is more effective than heuristic evaluation at finding usability problems associated with the knowledge-based level of performance. The practical application of this research is also discussed in the paper.
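The skill-/rule-/knowledge-based classification above lends itself to a simple tally of which evaluation method surfaces which category of problem. A minimal sketch (the problem records and method names are hypothetical, for illustration only):

```python
from collections import Counter

# Hypothetical problem records: (problem id, performance level, method that found it).
problems = [
    ("P1", "skill-based", "heuristic"),
    ("P2", "rule-based", "heuristic"),
    ("P3", "rule-based", "heuristic"),
    ("P4", "knowledge-based", "user-test"),
    ("P5", "knowledge-based", "user-test"),
    ("P6", "skill-based", "heuristic"),
]

def tally(records):
    """Count usability problems per (method, performance level) pair."""
    return Counter((method, level) for _, level, method in records)

for (method, level), n in sorted(tally(problems).items()):
    print(f"{method:10s} {level:16s} {n}")
```

A skew like the one in this toy data (heuristic evaluation dominating skill- and rule-based problems, user testing dominating knowledge-based ones) is the pattern the abstract reports.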
7.
《Behaviour & Information Technology》2012,31(7):707-737
The importance of evaluating the usability of e-commerce websites is well recognised. User testing and heuristic evaluation methods are commonly used to evaluate the usability of such sites, but just how effective are these methods at identifying specific problems? This article describes an evaluation of these methods by comparing the number, severity and type of usability problems identified by each one. The cost of employing these methods is also considered. The findings highlight the number and severity level of 44 specific usability problem areas that were uniquely identified by either user testing or heuristic evaluation, common problems that were identified by both methods, and problems that were missed by each method. The results show that user testing uniquely identified major problems related to four specific areas and minor problems related to one area. Conversely, heuristic evaluation uniquely identified minor problems in eight specific areas and major problems in three areas.
8.
9.
Evidence shows that integrated development environments (IDEs) are too often functionality-oriented and difficult to use, learn, and master. This article describes challenges in the design of usable IDEs and in the evaluation of the usability of such tools. It also presents the results of three different empirical studies of IDE usability. Different methods are applied sequentially across the empirical studies in order to identify increasingly specific kinds of usability problems that developers face in their use of IDEs. The results of these studies suggest several problems in IDE user interfaces with the representation of functionalities and artifacts, such as reusable program components. We conclude by making recommendations for the design of IDE user interfaces with better affordances, which may ameliorate some of the most serious usability problems and help to create more human-centric software development environments.
10.
Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the User Action Framework and a corresponding evaluation tool, the Usability Problem Inspector (UPI), to help organize usability concepts and issues into a knowledge base. We conducted a comprehensive comparison study to determine whether our theory-based framework and tool could be effectively used to find important usability problems in an interface design, relative to two other established inspection methods (heuristic evaluation and cognitive walkthrough). Results showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness, and was consistent with cognitive walkthrough on these same measures. We also discuss other potential advantages of the UPI over heuristic evaluation and cognitive walkthrough when applied in practice. Potential applications of this work include a cost-effective alternative or supplement to lab-based formative usability evaluation during any stage of development.
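The thoroughness, validity, and effectiveness measures used to compare inspection methods are commonly defined set-wise: thoroughness is the share of real problems a method finds, validity is the share of its reports that are real, and effectiveness is their product. A minimal sketch, with hypothetical problem IDs standing in for a ground-truth set established by user testing:

```python
def method_scores(reported: set, real: set) -> dict:
    """Score one inspection method against a ground-truth set of real problems."""
    hits = reported & real                    # real problems the method found
    thoroughness = len(hits) / len(real)      # share of real problems found
    validity = len(hits) / len(reported)      # share of reports that are real
    return {
        "thoroughness": thoroughness,
        "validity": validity,
        "effectiveness": thoroughness * validity,
    }

# Hypothetical data: problems reported by an inspection vs. those confirmed as real.
real_problems = {"P1", "P2", "P3", "P4", "P5"}
inspection_reports = {"P1", "P2", "P3", "F1"}   # F1 = false positive

print(method_scores(inspection_reports, real_problems))
```

With this toy data the method finds 3 of 5 real problems (thoroughness 0.6) and 3 of its 4 reports are real (validity 0.75), giving effectiveness 0.45.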
11.
Today, the success of a software application strongly depends on the usability of its interface, so the evaluation of interfaces has become a crucial aspect of software engineering. It is recognized that automatic tools for graphical user interface evaluation may greatly reduce the costs of traditional activities performed during expert evaluation or user testing in order to estimate the success probability of an application. However, automatic methods need to be empirically validated in order to prove their effectiveness with respect to the attributes they are supposed to evaluate. In this work, we empirically validate a usability evaluation method conceived to assess consistency aspects of a GUI with no need to analyze the back-end. We demonstrate the validity of the approach by means of a comparative experimental study, where four web sites and a stand-alone interactive application are analyzed and the results compared to those of a human-based usability evaluation. The analysis of the results and the statistical correlation between the tool's rating and humans' average ratings show that the proposed methodology can indeed be a useful complement to standard techniques of usability evaluation.
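Validating an automatic tool by correlating its ratings with averaged human ratings, as described above, typically comes down to a Pearson correlation across the evaluated interfaces. A minimal sketch; the rating values below are hypothetical, not taken from the study:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical consistency ratings for five interfaces:
tool_ratings  = [0.82, 0.61, 0.74, 0.55, 0.90]   # automatic tool
human_ratings = [0.80, 0.58, 0.70, 0.60, 0.88]   # average of human evaluators

print(round(pearson(tool_ratings, human_ratings), 3))
```

A coefficient close to 1 would support the claim that the tool tracks human judgments of consistency; values near 0 would suggest the tool measures something else.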
12.
《International journal of human-computer studies》2006,64(2):79-102
How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguishes several problems with the measures, including whether they actually measure usability, whether they cover usability broadly, how they are reasoned about, and whether they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires in use; to study correlations between usability measures as a means of validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and the challenges discussed may strengthen studies of usability and usability research.
13.
Phishing is considered one of the most serious threats to the Internet and e-commerce. Phishing attacks abuse trust with the help of deceptive e-mails, fraudulent web sites and malware. To prevent phishing attacks, some organizations have implemented Internet browser toolbars for identifying deceptive activities. However, their levels of usability and their user interfaces vary, and some of the toolbars have obvious usability problems that can ultimately affect their performance. For the sake of future improvement, usability evaluation is indispensable. We discuss the usability of five typical anti-phishing toolbars: the built-in phishing prevention in Internet Explorer 7.0, the Google toolbar, the Netcraft Anti-phishing toolbar, SpoofGuard, and an Internet Explorer plug-in we have developed, Anti-phishing IEPlug. Our hypothesis was that the usability of anti-phishing toolbars, and as a consequence also their security, could be improved. Indeed, the heuristic usability evaluation found a number of usability issues. In this article, we describe the anti-phishing toolbars, discuss our approach to anti-phishing toolbar usability evaluation, and present our findings. Finally, we propose advice for improving the usability of anti-phishing toolbars, covering three key components of anti-phishing client-side applications: the main user interface, critical warnings, and the help system. For example, we found that the main user interface should keep the user informed and organize settings according to sound usability design; that all critical warnings an anti-phishing toolbar shows should be well designed; and that the help system should help users learn about phishing prevention and how to identify fraud attempts by themselves. One result of our research is also a classification of anti-phishing toolbar applications.
Linfeng Li is a student at the University of Tampere, Finland. Marko Helenius is Assistant Professor at the Department of Computer Sciences, University of Tampere, Finland.
14.
Evaluating teamwork support in tabletop groupware applications using collaboration usability analysis (cited by: 1; self-citations: 1; citations from others: 0)
Tabletop groupware systems have natural advantages for collaboration, but they present a challenge for application designers because shared work and interaction progress in different ways than in desktop systems. As a result, tabletop systems still have problems with usability. We have developed a usability evaluation technique, T-CUA, that focuses attention on teamwork issues and that can help designers determine whether prototypes provide adequate support for the basic actions and interactions that are fundamental to table-based collaboration. We compared T-CUA with expert review in a user study where 12 evaluators assessed an early tabletop prototype using one of the two evaluation methods. The group using T-CUA found more teamwork problems and found problems in more areas than those using expert review; in addition, participants found T-CUA to be effective and easy to use. The success of T-CUA shows the benefits of using a set of activity primitives as the basis for discount usability techniques.
15.
16.
Darryn Lavery Gilbert Cockton Malcolm P. Atkinson 《Behaviour & Information Technology》1997,16(4):246-266
Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.
17.
《International journal of human-computer interaction》2013,29(3):207-231
In the last decade, research on the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. Thus, there is growing demand for appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations from the viewpoint of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones was developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted with the usability checklist and usability testing. The proposed checklist discovered the majority of the usability problems found by usability testing, as well as additional problems. It is expected that the usability checklist proposed in this study can be used quickly and efficiently by usability practitioners to evaluate mobile phone UIs in the middle of the mobile phone development process.
18.
19.
Alistair Sutcliffe Michele Ryan Ann Doubleday Mark Springett 《Behaviour & Information Technology》2000,19(1):43-55
An evaluation method is proposed based on walkthrough analysis coupled with a taxonomic analysis of observed problems and causes of usability error. The model mismatch method identifies usability design flaws and missing requirements from user errors. The method is tested with a comparative evaluation of two information retrieval products. Different profiles of usability and requirements problems were found for the two products, even though their overall performance was similar.
20.
《Behaviour & Information Technology》2012,31(4-5):246-266
Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.