Similar Documents
20 similar documents found (search time: 15 ms)
1.
The use of e-learning technology is incontestably recognized as an important and integral part of the educational process. Considerable research has been carried out to understand how effective and usable e-learning systems are. In this paper, an empirical study is conducted to explore how lecturers interact with an e-learning environment, based on a predefined task model describing low-level interactions. Client-side log data is collected from university lecturers in the Electrical and Computer Science departments. Data analysis is then conducted to infer the degree of usability from the estimated usage metrics, together with further exploratory analysis of user feedback via the System Usability Scale. Experimental results reveal that the System Usability Scale score alone is not a sufficient measure of lecturers' true acceptance of and satisfaction with e-learning systems. The evaluation must be carried out in tandem with an analysis of the usage metrics derived, in a non-intrusive fashion, from interaction traces. The proposed approach is a milestone towards usability evaluation that improves acceptance and user experience for academic staff and students.
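For reference, the standard scoring of the System Usability Scale mentioned above can be sketched as follows (Brooke's original 0-100 scoring; the example responses are illustrative, not data from the study):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten Likert
    responses (1 = strongly disagree ... 5 = strongly agree).
    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The summed contributions are scaled by 2.5 to reach the 0-100 range."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A mildly positive set of responses (illustrative only).
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # prints 75.0
```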

2.
Despite recent advances of electronic technologies in e-learning, a consolidated evaluation methodology for e-learning applications is not yet available. The evaluation of educational software must consider its usability and, more generally, its accessibility, as well as its didactic effectiveness. This work is a first step towards the definition of a methodology for evaluating e-learning applications. Specific usability attributes capturing the peculiar features of these applications are identified. A preliminary user study involving a group of e-students, observed during their interaction with an e-learning application in a real situation, is reported. A proposal is then put forward to adapt to the e-learning domain a methodology for systematic usability evaluation, called SUE. Specifically, evaluation patterns are proposed that can guide evaluators in the analysis of an e-learning application.

3.
Today, the success of a software application strongly depends on the usability of its interface, so the evaluation of interfaces has become a crucial aspect of software engineering. It is recognized that automatic tools for graphical user interface evaluation may greatly reduce the costs of traditional activities performed during expert evaluation or user testing in order to estimate the success probability of an application. However, automatic methods need to be empirically validated in order to prove their effectiveness with respect to the attributes they are supposed to evaluate. In this work, we empirically validate a usability evaluation method conceived to assess consistency aspects of a GUI with no need to analyze the back-end. We demonstrate the validity of the approach by means of a comparative experimental study, where four web sites and a stand-alone interactive application are analyzed and the results compared to those of a human-based usability evaluation. The analysis of the results and the statistical correlation between the tool's rating and humans' average ratings show that the proposed methodology can indeed be a useful complement to standard techniques of usability evaluation.

4.
An Application Programming Interface (API) provides a programmatic interface to a software component that is often offered publicly and may be used by programmers who are not the API's original designers. APIs play a key role in software reuse. By reusing high-quality components and services, developers can increase their productivity and avoid costly defects. The usability of an API is a qualitative characteristic that evaluates how easy the API is to use. Recent years have seen a considerable increase in research efforts aimed at evaluating the usability of APIs. An API usability evaluation can identify problem areas and provide recommendations for improving the API. In this systematic mapping study, we focus on 47 primary studies to identify the aims and methods of API usability studies. We investigate which API usability factors are evaluated, at which phases of API development API usability is evaluated, and what the current limitations and open issues in API usability evaluation are. We believe that the results of this literature review will be useful for both researchers and industry practitioners interested in investigating the usability of APIs and in new API usability evaluation methods.

5.
In this paper, we present a usability study assessing a visual language-based tool for developing adaptive e-learning processes. The tool implements the adaptive self-consistent learning object SET (ASCLO-S) visual language, a special case of flow diagrams, to be used by instructional designers to define classes of learners through stereotypes and to specify the most suitable adaptive learning process for each class of learners. The usability study is based on the combined use of two techniques: a questionnaire-based survey and an empirical analysis. The survey has been used to obtain feedback from the subjects' point of view; in particular, it has been useful to capture the subjects' perceived usability. The outcomes show that both the proposed visual notation and the system prototype are suitable for instructional designers with or without experience in computer usage and in tools for defining e-learning processes. This result is further confirmed by the empirical analysis we carried out by analysing the correlation between the effort to develop adaptive e-learning processes and some suitably defined measures for those processes. Indeed, the empirical analysis revealed that the effort required to model e-learning processes is not influenced by the instructional designer's experience with e-learning tools, but depends only on the size of the developed process.

6.
We present the results of a usability evaluation of a locally developed hypermedia information system aimed at conservation biologists and wildlife managers in Namibia. Developer and end user come from different ethnic backgrounds, as is common in software development in Namibia and many developing countries. To overcome both the cultural and the authoritarian gap between usability evaluator and user, the evaluation was held as a workshop with usability evaluators who shared the target users' ethnic and social backgrounds. Different data collection methods were used, and results as well as specific incidents were recorded. Results suggest that it is difficult for Namibian computer users to evaluate functionality independently from content. Users displayed evidence of a passive search strategy and an expectation that structure is provided rather than self-generated. The comparison of data collection methods suggests that questionnaires are inappropriate in Namibia because they do not elicit a truthful response from participants, who tend to provide answers they think are "expected". The paper concludes that usability goals and methods have to be determined and defined within the target users' cultural context.

7.
Usability evaluation helps to determine whether interactive systems support users in their work tasks. However, knowledge about those tasks and, more generally, about the work-domain is difficult to bring to bear on the processes and outcome of usability evaluation. One way to include such work-domain knowledge might be Cooperative Usability Testing, an evaluation method that consists of (a) interaction phases, similar to classic usability testing, and (b) interpretation phases, where the test participant and the moderator discuss incidents and experiences from the interaction phases. We have studied whether such interpretation phases improve the relevance of usability evaluations in the development of work-domain specific systems. The study included two development cases. We conclude that the interpretation phases generate additional insight and redesign suggestions related to observed usability problems. Also, the interpretation phases generate a substantial proportion of new usability issues, thereby providing a richer evaluation output. Feedback from the developers of the evaluated systems indicates that the usability issues that are generated in the interpretation phases have substantial impact on the software development process. The benefits of the interpretation phases may be explained by the access these provide both to the test participants’ work-domain knowledge and to their experiences as users.

8.
In an experiment conducted to study the effects of product expectations on subjective usability ratings, participants (N = 36) read a positive or a negative product review for a novel mobile device before a usability test, while the control group read nothing. In the test, half of the users performed easy tasks with the device, and the other half hard ones. A standard usability test procedure was utilized in which objective task performance measurements as well as subjective post-task and post-experiment usability questionnaires were deployed. The study revealed a surprisingly strong effect of positive expectations on subjective post-experiment ratings: the participants who had read the positive review gave the device significantly better post-experiment ratings than did the negative-prime and no-prime groups. This boosting effect of the positive prime held even in the hard task condition, where the users failed in most of the tasks. This finding highlights the importance of understanding: (1) what kinds of product expectations participants bring with them to the test, (2) how well these expectations represent those of the intended user population, and (3) how the test situation itself influences and may bias these expectations.

9.
As mobile phones offer various advanced functionalities and features, usability issues are increasingly challenging. Due to the particular characteristics of a mobile phone, typical usability evaluation methods and heuristics, most of which were developed for software systems, might not apply effectively to a mobile phone. Another point to consider is that usability evaluation activities should help designers find usability problems easily and produce better design solutions. To support usability practitioners in the mobile phone industry, we propose a framework for analytically evaluating the usability of a mobile phone, based on a multi-level, hierarchical model of usability factors. The model was developed on the basis of a set of collected usability problems and our previous study on a conceptual framework for identifying usability impact factors. It has multiple abstraction levels, each of which considers the usability of a mobile phone from a particular perspective. As there are goal-means relationships between adjacent levels, a range of usability issues can be interpreted in a holistic as well as a diagnostic way. Another advantage is that it supports two different types of evaluation approaches: task-based and interface-based. To support both approaches, we developed four sets of checklists, concerned respectively with task-based evaluation and three different interface types: Logical User Interface (LUI), Physical User Interface (PUI) and Graphical User Interface (GUI). The proposed framework specifies an approach to quantifying usability, so that several usability aspects are collectively measured to give a single score with the use of the checklists. A small case study was conducted to examine the applicability of the framework and to identify aspects of it to be improved. It showed that the framework can be a useful tool for evaluating the usability of a mobile phone. Based on the case study, we improved the framework so that usability practitioners can use it more easily and consistently.
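A minimal sketch of the kind of checklist-based quantification such a framework describes, collapsing several usability aspects into one score. The checklist names mirror the four sets above, but the pass rates and weights are invented for illustration; the paper derives its own weighting:

```python
# Hypothetical checklist results: fraction of checklist items passed.
checklists = {"task-based": 0.80, "LUI": 0.70, "PUI": 0.90, "GUI": 0.60}

# Hypothetical relative importance weights (must sum to 1).
weights = {"task-based": 0.4, "LUI": 0.2, "PUI": 0.2, "GUI": 0.2}

# Collapse the per-checklist results into a single 0-100 usability score.
score = sum(checklists[c] * weights[c] for c in checklists) * 100
print(round(score, 1))  # prints 76.0
```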

10.
There has been little research on the assessment of learning management systems (LMS) within educational organizations, both as web-based learning systems for e-learning and as supportive tools for blended learning environments. This study proposes a conceptual e-learning assessment model, the hexagonal e-learning assessment model (HELAM), suggesting a multi-dimensional approach for LMS evaluation via six dimensions: (1) system quality, (2) service quality, (3) content quality, (4) learner perspective, (5) instructor attitudes, and (6) supportive issues. A survey instrument based on HELAM has been developed and applied to 84 learners. This sample consists of students at both undergraduate and graduate levels who are users of a web-based learning management system, U-Link, at Brunel University, UK. The survey instrument has been tested for content validity, reliability, and criterion-based predictive validity. The analytical results strongly support the appropriateness of the proposed model in evaluating LMSs through learners' satisfaction. The exploratory factor analysis showed that each of the six dimensions of the proposed model had a significant effect on learners' perceived satisfaction. The findings of this research will be valuable for both academics and practitioners of e-learning systems.

11.
Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work on UEMs. The dogmas include using inadequate procedures and measures for assessment, focusing on win–lose outcomes, holding simplistic models of how usability evaluators work, concentrating on evaluation rather than on design and working from the assumption that usability problems are real. We discuss research approaches that may help move beyond the dogmas. In particular, we emphasise detailed studies of evaluation processes, assessments of the impact of UEMs on design carried out in real-world systems development and analyses of how UEMs may be combined.

12.
The aim of this paper was to design and assess a comprehensive model for managing the e-learning process and to define the relationship between systematic implementation of the model, outcomes of certain e-learning aspects and subject of e-learning. The validation of the model was performed by using two questionnaires sent via e-mail to teachers and field experts from the chosen sample of 14 European schools participating in an EU-funded project. Research results imply the existence of a clear link between planning and controlling of the e-learning process and its learning outcomes. On the other hand, no empirical relationship between the e-learning outcomes and the subject of learning has been established. It is believed that the model and its practical implications can be used by institutions engaged in e-learning, or as a process model for introducing e-learning related activities.

13.
Model-based software development is carried out as a well-defined process. Depending on the applied approach, different phases can be distinguished, e.g. requirements specification, design, prototyping, implementation and usability evaluation. During this iterative process, manifold artifacts are developed and modified, including models, source code and usability evaluation data. CASE tools support the development stages well, but lack a seamless integration of usability evaluation methods. We aim at bridging the gap between development and usability evaluation by enabling the cooperative use of artifacts with the respective tools. As a result of this integration, usability experts save time preparing an evaluation, and evaluation results can be incorporated back into the development process more easily. As an example, we show our work on enhancing the Eclipse framework to support usability evaluation for task model-based software development.

14.
Today people increasingly expect more from the functionality of a web site, so web usability has emerged as an important topic. It is generally hard to renew existing web sites to meet the changing demands of users. Therefore, the present study aims at detailing the usability problems of web sites. To this end, Heuristic Evaluation (HE) is used to identify the usability problems, and the Analytic Hierarchy Process (AHP) is used to rate their severity. Finally, for a more user-friendly web site, a new approach to judging the severity of usability problems is developed by integrating AHP into HE.

Relevance to industry

Usability is of increasing importance in the web development industry and its communities. Different usability evaluation techniques have been developed and incorporated into the process of web site design and development. This study proposes a new approach to reveal usability problems on a web site and to define the solution priority of these problems.
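A brief sketch of how AHP can derive severity weights from pairwise comparisons of usability problems, using the common column-normalisation approximation of the principal eigenvector. The comparison values below are hypothetical, not taken from the study:

```python
def ahp_priorities(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    by normalising each column and averaging across the rows (a standard
    approximation of the principal eigenvector)."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Three hypothetical usability problems compared pairwise for severity:
# matrix[i][j] says how much more severe problem i is than problem j
# (Saaty's 1-9 scale; reciprocal entries below the diagonal).
m = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
weights = ahp_priorities(m)
print([round(w, 2) for w in weights])  # prints [0.63, 0.26, 0.11]
```

The resulting weights give the fixing priority of each problem: the first problem receives roughly 63% of the total severity.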

15.
In all parts of organisations, developments of different new subsystems in the areas of knowledge and learning are flourishing. Over recent decades, new systems for the classification of jobs have emerged both at the level of organisations and at the macro-labour-market level. Recent developments in job evaluation systems make it possible to cope with new demands for equity at work (between, for example, genders, races and physical abilities). Other systems have emerged to describe job requirements in terms of skills, knowledge and competence. Systems for learning at work and web-based learning have created a demand for new ways to classify and to understand the process of learning. Often these new systems have been taken from other areas of the organisation not directly concerned with facilitating workplace learning. All these new systems are of course closely interrelated but, in most organisations, a major problem is the severe lack of cohesion and compatibility between the different subsystems. The aim of this paper is to propose a basis for how different human resource systems can be integrated into the business development of an organisation. We discuss this problem and develop proposals as alternatives to integrated macro-systems. A key element in our proposition is a structure for the classification of knowledge and skill to be used in all parts of the process. This structure should be used as an added dimension, or overlay, on all other subsystems of the total process. This will facilitate the continued use of all existing systems within different organisations. We develop Burge's (personal communication) model of learning to show that learning is not a successive linear process but an iterative one. In this way we emphasise the need for greater involvement of learners in the development of learning systems towards increased usability in a networked system. This paper is divided into two closely related parts. The first part gives an overview of the lack of compatibility between the different subsystems; in it we note two paradoxes which impact learning and for which we propose solutions. The second part deals with 'usability' aspects of these competency-related systems, in particular usability in e-learning systems; in it we describe an example of a new organisational structure. We conclude by discussing four key concepts that are necessary conditions for organisations to address when developing their human capital. Establishing these conditions helps ensure compatibility and usability in e-learning systems.

16.
Design and Implementation of an Automated Usability Evaluation System for Web Systems (cited by 1: 0 self-citations, 1 by others)
To uncover potential usability problems from real user behavior and avoid the unreliability of heuristic evaluation, an automated usability evaluation system for Web systems based on large amounts of user behavior data is proposed. By automatically matching large volumes of user behavior data against the use cases produced during requirements analysis, inconsistent tasks and inconsistent task precedence relations are discovered, which in turn reveal usability problems. Results from a real-world project show that the reported abnormal tasks and abnormal task precedence relations provide guidance for usability evaluators and can serve as a reference for further improving Web systems.
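A minimal sketch of the matching idea described above: checking an observed behavior trace against task precedence relations taken from a use case. Task names and precedences are invented for illustration; the actual system matches much richer behavior data:

```python
# Hypothetical precedences from a use case: each pair (before, after)
# says task `before` must occur prior to task `after`.
expected = [("login", "search"), ("search", "checkout")]

def violated_precedences(trace, precedences):
    """Return the (before, after) pairs that the observed trace violates,
    i.e. `after` occurs without `before` occurring first."""
    violations = []
    for before, after in precedences:
        if after in trace:
            if before not in trace or trace.index(before) > trace.index(after):
                violations.append((before, after))
    return violations

# The user searched before logging in: a potential usability signal.
print(violated_precedences(["search", "login", "checkout"], expected))
# prints [('login', 'search')]
```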

18.
Tabletop groupware systems have natural advantages for collaboration, but they present a challenge for application designers because shared work and interaction progress in different ways than in desktop systems. As a result, tabletop systems still have problems with usability. We have developed a usability evaluation technique, T-CUA, that focuses attention on teamwork issues and that can help designers determine whether prototypes provide adequate support for the basic actions and interactions that are fundamental to table-based collaboration. We compared T-CUA with expert review in a user study where 12 evaluators assessed an early tabletop prototype using one of the two evaluation methods. The group using T-CUA found more teamwork problems and found problems in more areas than those using expert review; in addition, participants found T-CUA to be effective and easy to use. The success of T-CUA shows the benefits of using a set of activity primitives as the basis for discount usability techniques.

19.
In this paper we present eTeacher, an intelligent agent that provides personalized assistance to e-learning students. eTeacher observes a student's behavior while he/she is taking online courses and automatically builds the student's profile. This profile comprises the student's learning style and information about the student's performance, such as exercises done, topics studied and exam results. In our approach, a student's learning style is automatically detected from the student's actions in an e-learning system using Bayesian networks. eTeacher then uses the information contained in the student profile to proactively assist the student by suggesting personalized courses of action that will help him/her during the learning process. eTeacher has been evaluated in assisting Systems Engineering students, and the results obtained thus far are promising.
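The paper uses Bayesian networks; as a simplified illustration of the underlying idea, a naive Bayes update over observed actions can yield a posterior over learning styles. All labels, priors and likelihoods below are invented for the sketch, not taken from eTeacher:

```python
from math import prod  # Python 3.8+

# Hypothetical prior over two learning styles.
prior = {"visual": 0.5, "verbal": 0.5}

# Hypothetical likelihoods P(action | style) for observable actions.
likelihood = {
    "visual": {"watched_video": 0.7, "read_text": 0.2},
    "verbal": {"watched_video": 0.3, "read_text": 0.8},
}

def posterior(actions):
    """Bayes update: P(style | actions) ∝ P(style) · Π P(action | style),
    assuming the observed actions are conditionally independent."""
    unnorm = {s: prior[s] * prod(likelihood[s][a] for a in actions)
              for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Two video views and one text read shift belief towards "visual".
print(posterior(["watched_video", "watched_video", "read_text"]))
```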

20.
There is agreement that perceived usability matters beyond the actual effectiveness of software systems. Perceived usability is often obtained from self-reports provided after system use. Aiming to improve summative usability testing, we propose a methodology for in-depth testing of users' performance and perceived usability at the task level. The metacognitive research approach allows detailed analysis of cognitive processes. Adapting its methodologies, we propose the Metacognitive Usability Profile (MUP), which includes a comprehensive set of measures based on collecting confidence in the success of each particular task and triangulating it with objective measures. We demonstrate the use of the MUP by comparing two versions of a project management system. Based on a task analysis, we identified tasks that differ between the versions and let participants (N = 100) use both versions. Although no difference was found between the versions in system-level perceived usability, the detailed task-level analysis exposed many differences. In particular, overconfidence was associated with low performance, which suggests that user interfaces should avoid fostering illusions of knowing. Overall, the study demonstrates how the MUP exposes the challenges users face. This, in turn, allows choosing the better task implementation among the examined options and focusing usability-improvement efforts.
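A minimal sketch of the kind of task-level triangulation the MUP proposes: comparing stated confidence with actual success to expose overconfidence. The per-task data below are invented for illustration:

```python
# Hypothetical per-task records from a usability test: the user's stated
# confidence of success (0-1) and whether the task actually succeeded.
tasks = [
    {"confidence": 0.9, "success": True},
    {"confidence": 0.8, "success": False},
    {"confidence": 0.6, "success": True},
    {"confidence": 0.9, "success": False},
]

mean_conf = sum(t["confidence"] for t in tasks) / len(tasks)
mean_succ = sum(t["success"] for t in tasks) / len(tasks)

# Positive bias = overconfidence: users believe they succeeded more
# often than they actually did, hinting at an "illusion of knowing".
bias = mean_conf - mean_succ
print(round(bias, 2))  # prints 0.3
```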


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号