Similar Literature
20 similar documents retrieved
1.
Evaluating e-learning systems is a complex activity which requires consideration of several criteria addressing quality in use as well as educational quality. Heuristic evaluation is a widespread method for usability evaluation, yet its output is often prone to subjective variability, primarily due to the generality of many heuristics. This paper presents pattern-based (PB) inspection, which aims at reducing this drawback by exploiting a set of evaluation patterns to systematically drive inspectors in their evaluation activities. The application of PB inspection to the evaluation of e-learning systems is reported together with a study that compares this method to heuristic evaluation and user testing. The study involved 73 novice evaluators and 25 end users, who evaluated an e-learning application using one of the three techniques. The comparison metric was defined along six major dimensions, covering concepts of classical test theory and pragmatic aspects of usability evaluation. The study showed that evaluation patterns, capitalizing on the reuse of expert evaluators' know-how, provide a systematic framework which reduces reliance on individual skills, increases inter-rater reliability and output standardization, permits the discovery of a larger set of different problems, and decreases evaluation cost. Results also indicated that evaluation in general is strongly dependent on the methodological apparatus as well as on the judgement bias and individual preferences of evaluators, providing support to the conceptualisation of interactive quality as a subjective judgement recently brought forward by the UX research agenda.

2.
We present the results of a usability evaluation of a locally developed hypermedia information system aimed at conservation biologists and wildlife managers in Namibia. Developer and end user come from different ethnic backgrounds, as is common in software development in Namibia and many developing countries. To overcome both the cultural and the authoritarian gap between usability evaluator and user, the evaluation was held as a workshop with usability evaluators who shared the target users' ethnic and social backgrounds. Several data collection methods were used, and results as well as specific incidents were recorded. Results suggest that it is difficult for Namibian computer users to evaluate functionality independently from content. Users displayed evidence of a passive search strategy and an expectation that structure is provided rather than self-generated. The comparison of data collection methods suggests that questionnaires are inappropriate in Namibia because they do not elicit a truthful response from participants, who tend to provide answers they think are "expected". The paper concludes that usability goals and methods have to be determined and defined within the target users' cultural context.

3.
Many efforts to improve the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both of these approaches depend on a complete division of work between developers and evaluators. This article takes a different approach by exploring whether software developers and designers can be trained to conduct their own usability evaluations. The article is based on an empirical study in which 36 teams, with a total of 234 first-year university students in software development and design programmes, were trained through a 40-hour introductory course in user-based website usability testing. They used the techniques from this course for planning, conducting, and interpreting the results of a usability evaluation of an interactive website. They gained good competence in conducting the evaluation, defining user tasks and producing a usability report, while they were less successful in acquiring skills for identifying and describing usability problems.

4.
5.
The evaluation of interactive systems has been an active subject of research for many years. Many methods have been proposed, but most of them do not take the architectural specificities of an agent-based interactive system into account, nor do they focus on the link between architecture and evaluation. In this paper, we present an agent-based architecture model for interactive systems. Then, based on this architecture, we propose a generic, reconfigurable evaluation environment, called EISEval, designed and developed to help evaluators analyze and evaluate certain aspects of interactive systems in general, and of agent-based interactive systems in particular: the user interface (UI), non-functional properties (e.g., response time, complexity) and user characteristics (e.g., abilities, preferences, progress). System designers can draw useful conclusions from the evaluation results to improve the system. This environment was applied to evaluate an agent-based interactive system used to supervise an urban transport network in a laboratory study.

6.
Tabletop groupware systems have natural advantages for collaboration, but they present a challenge for application designers because shared work and interaction progress differently than in desktop systems. As a result, tabletop systems still have problems with usability. We have developed a usability evaluation technique, T-CUA, that focuses attention on teamwork issues and can help designers determine whether prototypes provide adequate support for the basic actions and interactions that are fundamental to table-based collaboration. We compared T-CUA with expert review in a user study in which 12 evaluators assessed an early tabletop prototype using one of the two evaluation methods. The group using T-CUA found more teamwork problems, and found problems in more areas, than those using expert review; in addition, participants found T-CUA to be effective and easy to use. The success of T-CUA shows the benefits of using a set of activity primitives as the basis for discount usability techniques.

7.
To support the transformation of system engineering from the project-based development of highly customer-specific solutions to the reuse and customization of 'system products', we integrate two process reference models: one for reuse- and product-oriented industrial engineering, and one extending ISO/IEC 12207 on software life cycle processes with software- and system-level product management. We synthesize the key process elements of both models to enhance ISO/IEC 15288 on system life cycle processes with product- and reuse-oriented engineering and product management practices, yielding an integrated framework for process assessment and improvement in contexts where systems are developed and evolved as products.

8.
Computer professionals need robust, easy-to-use usability evaluation methods (UEMs) to help them systematically improve the usability of computer artifacts. However, cognitive walkthrough (CW), heuristic evaluation (HE), and thinking-aloud study (TA), three of the most widely used UEMs, suffer from a substantial evaluator effect: multiple evaluators evaluating the same interface with the same UEM detect markedly different sets of problems. A review of 11 studies of these three UEMs reveals that the evaluator effect exists for both novice and experienced evaluators, for both cosmetic and severe problems, for both problem detection and severity assessment, and for evaluations of both simple and complex systems. The average agreement between any two evaluators who have evaluated the same system using the same UEM ranges from 5% to 65%, and none of the three UEMs is consistently better than the others. Although evaluator effects of this magnitude may not be surprising for a UEM as informal as HE, it is certainly notable that a substantial evaluator effect persists for evaluators who apply the strict procedure of CW or observe users thinking out loud. Hence, it is highly questionable to use a TA with one evaluator as an authoritative statement about what problems an interface contains. Generally, the application of the UEMs is characterized by (a) vague goal analyses leading to variability in the task scenarios, (b) vague evaluation procedures leading to anchoring, or (c) vague problem criteria leading to anything being accepted as a usability problem, or all of these. The simplest way of coping with the evaluator effect, which cannot be completely eliminated, is to involve multiple evaluators in usability evaluations.
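The 5%-65% figure above is an any-two-evaluator agreement: the overlap between the problem sets of each pair of evaluators, averaged over all pairs. A minimal sketch of that computation, using Jaccard overlap and invented problem sets (the exact overlap measure used in the reviewed studies may vary):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap of two evaluators' problem sets (0 = disjoint, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical problem sets reported by three evaluators of the same interface.
evaluators = {
    "E1": {"P1", "P2", "P3", "P7"},
    "E2": {"P2", "P3", "P5"},
    "E3": {"P1", "P3", "P5", "P8", "P9"},
}

# Average agreement between any two evaluators, as quoted in the 5%-65% range.
pairs = list(combinations(evaluators.values(), 2))
avg = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
print(f"average any-two-evaluator agreement: {avg:.0%}")
```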

9.
The IEC 61499 standard has been developed to allow the modeling and design of distributed control systems, bringing advanced software engineering concepts (such as abstraction and encapsulation) to the world of control engineering. The introduction of this standard into existing control environments poses challenges, since programs written using the widespread IEC 61131-3 programming standard cannot be directly executed in a fully IEC 61499 environment without reengineering effort. To solve this problem, this paper presents an architecture to integrate modules of the two standards, allowing the benefits of both to be exploited. The proposed architecture is based on the coexistence of control software of the two standards: modules written in one standard interact with particular interfaces that encapsulate the functionalities and information to be exchanged with the other standard. In particular, the architecture permits the use of available run-times without modification, allows the reuse of software modules, and exploits existing features of the standards. A methodology to integrate IEC 61131-3 modules in an IEC 61499 distributed solution based on this architecture is also developed and is described via a case study to prove feasibility and benefits. Experimental results demonstrate that the proposed solution does not add substantial load or delays to the system when compared to an IEC 61131-3 based solution. By acting on the task period, it can achieve performance similar to an IEC 61499 solution.

10.
This study highlights how heuristic evaluation, as a usability evaluation method, can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed about the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combining multiple sessions, rather than relying on one evaluator and one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee, given the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both project budget and time, while ensuring a reduced error rate for the universal usage of built environments.

11.

This study was conducted to compare CHE between Human-Computer Interaction (HCI) experts and novices in evaluating a smartphone app for a cultural heritage site. It uses the Smartphone Mobile Application heuRisTics (SMART), which focus on smartphone applications, and the traditional Nielsen heuristics, which cover a wider range of interactive systems. Six experts and six novices used a severity rating scale to categorise the severity of the usability issues, and the issues were mapped to both sets of heuristics. The study found that expert and novice evaluators identified 19 and 14 usability issues, respectively, ten of which were the same; these shared issues were, however, rated differently. Although a t-test indicated no significant differences between experts and novices in their severity ratings, the results nevertheless indicate that both kinds of evaluators are needed in CHE to provide a more comprehensive perspective on the severity of the usability issues. Furthermore, the mapping of the usability issues to the Nielsen and SMART heuristics showed that more issues with the smartphone app could be addressed through smartphone-specific heuristics than through general heuristics, indicating that the former is a better tool for heuristic evaluation of smartphone apps. This study also provides new insight into the number of evaluators needed for CHE.
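A minimal sketch of the comparison reported above, an independent-samples t-test on the severity ratings of the shared issues; the ratings below are invented for illustration and do not come from the study:

```python
from scipy import stats

# Hypothetical 0-4 severity ratings for the ten shared usability issues.
experts = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
novices = [2, 2, 3, 4, 1, 3, 3, 2, 2, 3]

# Two-sided independent-samples t-test comparing the two evaluator groups.
t, p = stats.ttest_ind(experts, novices)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 -> no significant difference
```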

12.
Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work on UEMs. The dogmas include using inadequate procedures and measures for assessment, focusing on win-lose outcomes, holding simplistic models of how usability evaluators work, concentrating on evaluation rather than on design, and working from the assumption that usability problems are real. We discuss research approaches that may help move beyond the dogmas. In particular, we emphasise detailed studies of evaluation processes, assessments of the impact of UEMs on design carried out in real-world systems development, and analyses of how UEMs may be combined.

13.
Usability is a software attribute usually associated with how easy a given interactive system is to use and to learn. Usability evaluation (UE) is becoming an important part of software development, providing results based on quantitative and qualitative estimations. In this context, qualitative results are usually obtained through a Qualitative Usability Testing (QUT) process, which includes a number of different methods focused on analyzing the interface of a particular interactive system. These methods become complex when a large number of interactive systems belonging to the same context of use have to be considered jointly to provide a general diagnosis, as a considerable amount of information must be visualized and treated simultaneously. Diagnosing the most general usability problems of a context of use as a whole from a qualitative viewpoint thus remains a challenge for UE. Identifying such problems can help to evaluate a new interface belonging to this context, and to prevent usability errors when a novel interactive system is being developed. From a quantitative viewpoint, condensing results into single scores, metrics or statistical functions is an acceptable solution for processing huge amounts of usability-related information. Nevertheless, QUT processes need to keep their richness by prioritizing the "what" over the "how much/how many" questions related to the detection of usability problems. To cope with this situation, this paper presents a new approach in which two data-mining techniques (association rules and decision trees) are used to extend the existing QUT process in order to provide a general usability diagnosis of a given context of use from a qualitative viewpoint. To validate our proposal, usability problem patterns in academic webpages in Spanish-speaking countries are assessed by processing 3450 records storing qualitative information collected by means of a heuristic evaluation.
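A minimal sketch of the association-rule half of such an approach, mining co-occurring heuristic violations from qualitative evaluation records; the records, heuristic names, and support/confidence thresholds are illustrative assumptions, not the paper's data:

```python
from itertools import combinations

# Hypothetical records: heuristics violated on each evaluated webpage.
records = [
    {"visibility", "consistency", "error_prevention"},
    {"visibility", "consistency"},
    {"consistency", "error_prevention"},
    {"visibility", "consistency", "aesthetics"},
    {"visibility", "aesthetics"},
]

def support(itemset: set) -> float:
    """Fraction of records containing every heuristic in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

# Rules A -> B between single heuristics, kept if support and confidence
# clear the (illustrative) thresholds used to flag general problem patterns.
heuristics = {h for r in records for h in r}
for a, b in combinations(heuristics, 2):
    for lhs, rhs in ((a, b), (b, a)):
        s = support({lhs, rhs})
        if s >= 0.4:
            confidence = s / support({lhs})
            if confidence >= 0.7:
                print(f"{lhs} -> {rhs}  support={s:.2f} confidence={confidence:.2f}")
```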

14.
Context: This paper is developed in the context of usability engineering. More specifically, it focuses on the use of modelling and simulation to help decision-making in the scope of usability evaluation. Objective: The main goal of this paper is to present UESim, a System Dynamics simulation model to help decision-making about the make-up of the evaluation team during the process of usability evaluation. Method: We followed four main research phases: (a) study identification, (b) study development, (c) running and observation, and (d) reflection. Along these phases the paper describes the literature review, the model building and validation, the model simulation and its results, and finally the reflection on them. Results: We developed and validated a model to simulate the usability evaluation process. Through three different simulations we analysed the effects of different compositions of the evaluation team on the outcome of the evaluation. The simulation results show the utility of the model for decision-making by varying the number and expertise of the evaluators employed. Conclusion: One of the main advantages of such a simulation model is that it allows developers to observe the evolution of the key indicators of the evaluation process over time. UESim is a customisable tool to support decision-making in the management of the usability evaluation process, since it makes it possible to analyse how the key process indicators are affected by the main management options of the usability evaluation process.
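A minimal sketch of the kind of stock-and-flow model a System Dynamics simulation like UESim builds on, with a stock of undiscovered problems drained at a rate driven by the number and expertise of evaluators; all parameters and the rate equation are illustrative assumptions, not UESim's actual formulation:

```python
# Stock-and-flow sketch: a stock of undiscovered usability problems is
# drained by a discovery flow that depends on the evaluation team.
TOTAL_PROBLEMS = 50.0  # hypothetical size of the problem population

def simulate(n_evaluators: int, expertise: float, days: int = 20) -> list:
    """expertise: assumed per-evaluator daily chance of spotting a given problem."""
    # Combined daily detection rate of the team (diminishing returns).
    find_rate = 1.0 - (1.0 - expertise) ** n_evaluators
    undiscovered = TOTAL_PROBLEMS
    found_over_time = []
    for _ in range(days):
        undiscovered -= find_rate * undiscovered  # discovery flow for one day
        found_over_time.append(TOTAL_PROBLEMS - undiscovered)
    return found_over_time

# Compare team make-ups: the management decision UESim supports.
for n, exp in [(3, 0.05), (5, 0.05), (3, 0.12)]:
    found = simulate(n, exp)
    print(f"{n} evaluators at expertise {exp}: {found[-1]:.1f} problems after 20 days")
```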

15.
Although various methods exist for performing usability evaluation, they lack a systematic framework for guiding and structuring the assessment and reporting activities. Consequently, analysis and reporting of usability data are ad hoc and do not live up to their potential in cost effectiveness, and usability engineering support tools are not well integrated. We developed the User Action Framework, a structured knowledge base of usability concepts and issues, as a framework on which to build a broad suite of usability engineering support tools. The User Action Framework helps to guide the development of each tool and to integrate the set of tools in the practitioner's working environment. An important characteristic of the User Action Framework is its own reliability in terms of consistent use by practitioners. Consistent understanding and reporting of the underlying causes of usability problems are requirements for cost-effective analysis and redesign. Thus, high reliability, in terms of agreement by users on what the User Action Framework means and how it is used, is essential for its role as a common foundation for the tools. Here we describe how we achieved high reliability in the User Action Framework, and we support the claim with the strongly positive results of a summative reliability study conducted to measure agreement among 10 usability experts in classifying 15 different usability problems. Reliability data from the User Action Framework are also compared to data collected from nine of the same usability experts using a classic heuristic evaluation technique.
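A minimal sketch of one standard way to quantify such classification agreement, Fleiss' kappa over raters assigning each problem to a framework category; the data are simulated, and the paper's own agreement measure may differ:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Simulated data: 15 usability problems (rows) classified by 10 experts
# (columns) into one of 4 hypothetical framework categories, coded 0-3.
rng = np.random.default_rng(0)
dominant = rng.integers(0, 4, size=15)  # the "consensus" category per problem
ratings = np.array([
    [c if rng.random() < 0.8 else rng.integers(0, 4) for _ in range(10)]
    for c in dominant
])

table, _ = aggregate_raters(ratings)  # per-problem counts in each category
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```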

16.
Heuristic evaluation is one of the most widely used methods for evaluating the usability of a software product. Proposed in 1990 by Nielsen and Molich, it consists of having a small group of evaluators perform a systematic review of a system under a set of guiding principles known as usability heuristics. Although Nielsen's 10 usability heuristics are the de facto standard in heuristic evaluation, recent research has provided evidence not only of the need for custom domain-specific heuristics, but also for methodological processes to create such sets of heuristics. In this work we apply the PROMETHEUS methodology, recently proposed by the authors, to develop the VLEs heuristics: a novel set of usability heuristics for the domain of virtual learning environments. In addition to the development of these heuristics, our research serves as further empirical validation of PROMETHEUS. To validate our results we performed a heuristic evaluation using both the VLEs and Nielsen's heuristics. Our design explicitly controls for evaluator variability by using a large number of evaluators: for both sets of heuristics the evaluation was performed independently by 7 groups of 5 evaluators each, that is, 70 evaluators in total, 35 using the VLEs and 35 using Nielsen's heuristics. In addition, we performed rigorous statistical analyses to establish the validity of the novel VLEs heuristics. The results show that the VLEs heuristics perform better than Nielsen's, finding more problems, which are also more relevant to the domain, as well as satisfying other quantitative and qualitative criteria. Finally, in contrast to evaluators using Nielsen's heuristics, evaluators using the VLEs heuristics reported greater satisfaction regarding utility, clarity, ease of use, and the need for additional elements.
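A minimal sketch of one plausible test for such a validation, a one-sided Mann-Whitney U on the number of problems found per group under each set of heuristics; the counts are invented, and the paper's actual analyses are not specified here:

```python
from scipy import stats

# Hypothetical problems found by each of the 7 groups per condition.
vles_counts = [24, 28, 22, 26, 30, 25, 27]
nielsen_counts = [18, 21, 17, 20, 19, 22, 16]

# One-sided Mann-Whitney U: do groups using the VLEs heuristics find more?
u, p = stats.mannwhitneyu(vles_counts, nielsen_counts, alternative="greater")
print(f"U = {u}, p = {p:.4f}")
```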

17.
The selection and customization of usability evaluation methods, given the peculiarities of their application domains, remains a critical issue, especially when dealing with complex products and/or non-expert usability evaluators. Moreover, as development progresses, the quality of the evaluation results has an increasingly heavy impact on the product design process. Starting from classic usability evaluation methods, the research described in this article generates multimethods semiautomatically. It allows quantitative characterization of these multimethods before their application in the field, and exploits the comparison between this prior assessment and a final estimate, made after adoption, to update the information used by the method selection process. The most critical issue related to usability, subjectivity, is considered and dealt with throughout the entire research. A case study, conducted at the end of the development phase, helps validate the proposed approach to usability evaluation.

18.
Evaluation is crucial for improving expert system design and performance. This paper stresses the need to consider system evaluation throughout the development process. It highlights the importance of evaluating system usability and discusses key usability issues. A number of basic evaluation methods are described, including interviews, questionnaires, observation, system logging, user diaries, laboratory experiments and field trials. Finally, the paper looks at evaluating systems within organisations and at assessing other long-term effects of expert systems.

19.
This article describes recent research in subjective usability measurement at IBM, focused on evaluating the psychometric properties of questionnaires designed for use in scenario-based usability evaluation. The questionnaires address evaluation both at a global, overall-system level and at a more detailed scenario level. The primary goals of this article are to (a) discuss the psychometric characteristics of IBM questionnaires that measure user satisfaction with computer system usability, and (b) provide the questionnaires, with administration and scoring instructions. For scenario-level measurement, the 3-item After-Scenario Questionnaire (ASQ) has excellent internal consistency, with coefficient alphas across a set of scenarios ranging from .90 to .96. For more global assessment, the Post-Study System Usability Questionnaire (PSSUQ) also has excellent internal consistency, with an overall coefficient alpha of .97. A preliminary principal factor analysis of 48 PSSUQ questionnaires suggested the presence of three factors, named after varimax rotation System Usefulness, Information Quality, and Interface Quality, with corresponding coefficient alphas of .96, .91, and .91. Evaluation of 377 PSSUQ questionnaires (modified to allow mailing to respondents in their offices and referred to as the Computer System Usability Questionnaire, or CSUQ) confirmed the structure of the preliminary principal factor analysis. Consequently, usability practitioners can use these questionnaires to help them measure users' satisfaction with the usability of computer systems in the context of scenario-based usability studies.
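Coefficient alpha (Cronbach's alpha) is the internal-consistency statistic quoted throughout: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of its computation for a 3-item ASQ-style questionnaire, with invented 7-point responses:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, k_items) matrix of item responses."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 7-point responses from 8 participants to 3 ASQ-style items.
responses = np.array([
    [6, 6, 5], [5, 5, 5], [7, 6, 6], [4, 5, 4],
    [6, 7, 6], [3, 4, 3], [5, 6, 5], [6, 5, 6],
])
print(f"coefficient alpha = {cronbach_alpha(responses):.2f}")
```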

20.
Whereas research in usability evaluation abounds, few evaluation approaches focus on utility. We present the utility inspection method (UIM), which prompts evaluators about the utility of the system they evaluate. The UIM asks whether a system uses global platforms, provides a support infrastructure, is robust, gives access to rich content, allows customisation, offers symbolic value, and supports companionship among users and between users and developers. We compare 47 participants' use of the UIM and heuristic evaluation (HE). The UIM helps identify more than three times as many problems as HE concerning the context of activities; HE helps identify 2.5 times as many problems as the UIM concerning the interface. Usability experts consider the problems found with the UIM more severe and more complex to solve than those found with HE. We argue that the UIM complements existing usability evaluation methods, and we discuss future research on utility inspection.
