Similar Articles
20 similar articles found (search time: 31 ms).
1.
This paper presents the evaluation of eight published Universal Design Resources (UDRs) to measure how effectively they support typical design processes and design psychology. New heuristics and principles were created to evaluate the UDRs from the point of view of designers who were novices in universal design. Established methodologies for heuristic evaluation were used with the new heuristics. The evaluators found numerous problems in seven of the eight UDRs, providing evidence for the hypothesis that the content of the UDRs does not facilitate the design process and is not commensurate with what is known about typical design psychology.

2.
The smartphone market is highly competitive nowadays. When buying a new device, users focus on visual esthetics, ergonomics, performance, and user experience, among other factors. Assessing usability issues helps improve these aspects. One popular method for detecting usability problems is heuristic evaluation, in which evaluators perform a systematic review guided by a set of usability heuristics, so using appropriate heuristics is essential. In this paper we present SMASH, an iteratively developed set of 12 usability heuristics for smartphones and mobile applications. SMASH (previously named TMD: Usability heuristics for Touchscreen-based Mobile Devices) was experimentally validated; the results support its utility and effectiveness.

3.
Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solving computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms such as SGS, which lets us derive expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which in turn complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. Utilizing our Markov chain framework, we show that expected hitting time is a rational function (a ratio of two polynomials) of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. On application networks, our approach is highly competitive for Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.
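As a rough illustration of the additive measure described in this abstract, the following sketch contrasts the standard multiplicative MPE utility with an additive (MAXSAT-style) utility when an explanation hits a zero CPT entry. The CPT values and variable layout are hypothetical assumptions for illustration, not the authors' benchmarks or implementation.

```python
# Two candidate explanations, each represented by the CPT entries P(x | parents(x))
# selected by the assignment; both happen to include one deterministic zero entry.
expl_a = [0.9, 0.7, 0.0, 0.8]
expl_b = [0.2, 0.1, 0.0, 0.3]

def multiplicative_utility(probs):
    """Standard MPE score: the product of the selected CPT entries.
    A single zero entry collapses the score to 0 for every such explanation,
    giving local search nothing to compare."""
    score = 1.0
    for p in probs:
        score *= p
    return score

def additive_utility(probs):
    """Additive gain in the spirit of MAXSAT: the sum of the selected CPT entries.
    Zero entries only lower the score, so the search can still discriminate
    between explanations whose probability is zero."""
    return sum(probs)

print(multiplicative_utility(expl_a), multiplicative_utility(expl_b))  # 0.0 and 0.0
print(additive_utility(expl_a), additive_utility(expl_b))              # 2.4 vs. 0.6
```

Under the multiplicative score both explanations look identical; under the additive score the search operator can still prefer the first, which is the intuition behind handling determinism additively.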

4.
Involvement of a domain expert in domain knowledge modeling, for its use in an automated decision support system, has been a subject of interest in the last few decades. The process of knowledge creation and its transformation for further reasoning requires verification and validation. In this article, we detail a first approach to a process that uses the well-known CQ-based evaluation methodology to identify and evaluate the process of legal-knowledge creation using SBVR-SE and its transformation to OWL 2. This increases the legal practitioner's confidence in using the created and transformed knowledge within a decision support system to derive inferences for in-court argumentation.

5.
6.
K-anonymisation is an approach to protecting individuals from being identified from data. Good k-anonymisations should retain data utility and preserve privacy, but few methods have considered these two conflicting requirements together. In this paper, we extend our previous work on a clustering-based method for balancing data utility and privacy protection, and propose a set of heuristics to improve its effectiveness. We introduce new clustering criteria that treat utility and privacy on equal terms and propose sampling-based techniques to optimally set up its parameters. Extensive experiments show that the extended method achieves good accuracy in query answering and is able to prevent linking attacks effectively.
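To make the general idea of clustering-based k-anonymisation concrete, here is a minimal sketch: records are greedily grouped into clusters of size at least k while minimising a simple information-loss measure, and each cluster is published as the range spanned by its members. The numeric quasi-identifiers, the range-based loss, and the greedy rule are illustrative assumptions, not the criteria or heuristics proposed in the paper.

```python
# Sketch of clustering-based k-anonymisation with a utility-oriented criterion.

def info_loss(cluster):
    """Total width of the generalised ranges: a crude proxy for lost data utility."""
    loss = 0.0
    for dim in range(len(cluster[0])):
        values = [rec[dim] for rec in cluster]
        loss += max(values) - min(values)
    return loss

def k_anonymise(records, k):
    remaining = list(records)
    clusters = []
    while len(remaining) >= k:
        seed = remaining.pop(0)
        cluster = [seed]
        while len(cluster) < k:
            # Greedily add the record that increases information loss the least.
            best = min(remaining, key=lambda r: info_loss(cluster + [r]))
            remaining.remove(best)
            cluster.append(best)
        clusters.append(cluster)
    if remaining and clusters:          # fold leftovers into the last cluster
        clusters[-1].extend(remaining)
    # Publish each cluster as the range of its members (the generalised values).
    return [[tuple((min(col), max(col)) for col in zip(*cluster))] * len(cluster)
            for cluster in clusters]

data = [(25, 47000), (26, 51000), (31, 60000), (33, 58000), (29, 52000), (41, 90000)]
for group in k_anonymise(data, k=2):
    print(group)
```

A privacy-aware variant would add a second term to the clustering criterion (e.g. penalising clusters that remain easy to link), which is the balance the paper's heuristics address.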

7.
Heuristic evaluation is one of the most widely used methods for evaluating the usability of a software product. Proposed in 1990 by Nielsen and Molich, it consists of having a small group of evaluators perform a systematic review of a system under a set of guiding principles known as usability heuristics. Although Nielsen's 10 usability heuristics are the de facto standard in heuristic evaluation, recent research has provided evidence not only of the need for custom domain-specific heuristics, but also for methodological processes to create such sets of heuristics. In this work we apply the PROMETHEUS methodology, recently proposed by the authors, to develop the VLEs heuristics: a novel set of usability heuristics for the domain of virtual learning environments. In addition to the development of these heuristics, our research serves as further empirical validation of PROMETHEUS. To validate our results we performed a heuristic evaluation using both the VLEs and Nielsen's heuristics. Our design explicitly controls for evaluator variability by using a large number of evaluators: for each set of heuristics the evaluation was performed independently by 7 groups of 5 evaluators each, that is, 70 evaluators in total, 35 using the VLEs heuristics and 35 using Nielsen's heuristics. In addition, we performed rigorous statistical analyses to establish the validity of the novel VLEs heuristics. The results show that the VLEs heuristics perform better than Nielsen's heuristics, finding more problems that are also more relevant to the domain, and satisfying other quantitative and qualitative criteria. Finally, in contrast to evaluators using Nielsen's heuristics, evaluators using the VLEs heuristics reported greater satisfaction regarding utility, clarity, ease of use, and need for additional elements.

8.
Process Aware Information Systems manage processes within organisations on the basis of business process models. These models can be created either from scratch or by reusing existing reference process models. Particular types of reference models are configurable process models, which are created by merging multiple models into a single one that can be customized to the needs of business experts. Using these models presents two main challenges: their creation and their configuration. In this paper, we focus on the first challenge and propose a novel algorithm for merging process models into a configurable process model. The distinctive feature of our work is that the input process models are pre-annotated with business capabilities that describe what each process element achieves. Our algorithm generates configurable models that are likewise annotated with their capabilities, which can then be used to address the second challenge: the configuration phase. We tested our algorithm on real-world process models, evaluating the creation time and the compression rate obtained after merging the input models. The results show that the models can be created in a few milliseconds while achieving a compression rate of 50%. We further carried out interviews with domain experts to assess the usefulness and maturity of this work. The results show the importance of automating process merging with the tool support we propose; however, further adaptation is required to integrate this work into the working environments of the interviewed experts.
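In very simplified form, the sketch below shows how capability annotations could drive such a merge: elements from different models that share a business capability become one configurable element, while the rest become variant-specific branches. The data model, match rule, and output format are assumptions for illustration only; the actual merging algorithm in the paper is considerably more involved.

```python
# Sketch: merge two capability-annotated process models into a configurable one.
# Each model maps element labels to a business-capability annotation (hypothetical).

model_a = {"Check order": "order.validation", "Ship by truck": "delivery.road"}
model_b = {"Validate order": "order.validation", "Ship by plane": "delivery.air"}

def merge(models):
    merged = {}                              # capability -> labels and origin models
    for name, model in models.items():
        for label, capability in model.items():
            entry = merged.setdefault(capability, {"labels": set(), "origins": set()})
            entry["labels"].add(label)
            entry["origins"].add(name)
    configurable = []
    for capability, entry in merged.items():
        configurable.append({
            "capability": capability,        # annotation kept for the configuration phase
            "labels": sorted(entry["labels"]),
            "shared": len(entry["origins"]) == len(models),  # common vs. variant-specific
        })
    return configurable

for element in merge({"A": model_a, "B": model_b}):
    print(element)
```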

9.
The worth of completing parallel tasks is modeled using utility functions, which decrease monotonically with time and represent the importance and urgency of a task. These functions define the utility earned by a task at the time of its completion. The performance of a computing system is measured as the total utility earned by all completed tasks over some interval of time (e.g., 24 h). We have designed, analyzed, and compared the performance of a set of heuristic techniques to maximize system performance when scheduling dynamically arriving parallel tasks onto a high performance computing (HPC) system that is oversubscribed and energy constrained. We consider six utility-aware heuristics and four existing heuristics for comparison. A new concept of temporary place-holders is compared with scheduling using permanent reservations. We also present a novel energy filtering technique that constrains the maximum energy-per-resource used by each task. We conducted a simulation study to evaluate the performance of these heuristics and techniques in multiple energy-constrained, oversubscribed HPC environments, and an experiment with a subset of the heuristics on a physical testbed system for one scheduling scenario. We demonstrate that our proposed utility-aware resource management heuristics significantly outperform existing techniques.
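As a rough illustration of these ideas, the sketch below combines a monotonically decreasing utility function, an energy filter that rejects tasks exceeding a per-resource energy cap, and a greedy admission rule. The function shapes, parameters, and the toy scheduler are simplified assumptions, not the heuristics evaluated in the paper.

```python
import math

def utility(task, completion_time):
    """Monotonically decreasing utility: importance scaled by an urgency-driven decay."""
    delay = max(0.0, completion_time - task["arrival"])
    return task["importance"] * math.exp(-task["urgency"] * delay)

def energy_filter(task, energy_cap_per_resource):
    """Drop tasks whose estimated energy per requested resource exceeds the cap."""
    return task["energy"] / task["resources"] <= energy_cap_per_resource

def schedule(tasks, now, energy_budget, energy_cap_per_resource):
    """Greedy placeholder policy: admit tasks by utility-per-energy until the
    remaining energy budget is exhausted."""
    admitted = []
    candidates = [t for t in tasks if energy_filter(t, energy_cap_per_resource)]
    candidates.sort(key=lambda t: utility(t, now + t["runtime"]) / t["energy"],
                    reverse=True)
    for task in candidates:
        if task["energy"] <= energy_budget:
            energy_budget -= task["energy"]
            admitted.append(task["name"])
    return admitted

tasks = [
    {"name": "t1", "arrival": 0, "runtime": 2, "importance": 10, "urgency": 0.5,
     "energy": 30, "resources": 4},
    {"name": "t2", "arrival": 1, "runtime": 1, "importance": 4, "urgency": 0.1,
     "energy": 50, "resources": 2},
]
print(schedule(tasks, now=2, energy_budget=60, energy_cap_per_resource=10))
```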

10.
This article presents a novel approach to morphological analysis based on genetic algorithms (GAs). Morphological analysis is of critical importance in data mining and information retrieval systems because it leads to a more homogeneous representation of words. The system presented here makes minimal use of language-specific information and is therefore more general than the rule-based techniques proposed in the literature. A number of heuristics, both general-purpose and specifically designed for the task, are created and tested as evaluation functions, and suitable models for the genetic operators are selected for the specific implementation. Finally, the system addresses the problem of simultaneously processing a large number of words without excessively increasing the execution time or deteriorating the segmentation quality of the final results. This is accomplished by applying a group of masks to divide the individuals into sections and operating the GA on these smaller sections instead of on the entire individual.

11.
12.
We present new methods for the automated learning of heuristics in knowledge-lean applications and for finding heuristics that generalize to unlearned domains. These applications lack domain knowledge for credit assignment; hence, operators for composing new heuristics are generally model-free, domain-independent, and syntactic in nature. The operators we have used are genetics-based; examples include mutation and crossover. Learning is based on a generate-and-test paradigm that maintains a pool of competing heuristics, tests them to a limited extent, creates new ones from those that performed well in the past, and prunes poor ones from the pool. We have studied three important issues in learning better heuristics: anomalies in performance evaluation; rational scheduling of limited computational resources when testing candidate heuristics in single-objective as well as multiobjective learning; and finding heuristics that can be generalized to unlearned domains. We show experimental results in learning better heuristics for process placement on distributed-memory multicomputers, node decomposition in branch-and-bound search, generation of test patterns in VLSI circuit testing, and VLSI cell placement and routing.
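A minimal sketch of the generate-and-test loop described above is given below. The weight-vector encoding of a candidate heuristic, the noisy toy test function, and the pool parameters are placeholder assumptions for illustration, not the applications or resource-scheduling strategies studied in the paper.

```python
import random

random.seed(0)

def random_heuristic(n_features=4):
    """A candidate heuristic encoded as a weight vector over problem features."""
    return [random.uniform(-1, 1) for _ in range(n_features)]

def limited_test(heuristic, n_trials=20):
    """Cheap, noisy performance estimate (stands in for limited runs on test cases)."""
    target = [0.5, -0.2, 0.8, 0.1]                  # hypothetical 'good' weights
    error = sum((w - t) ** 2 for w, t in zip(heuristic, target))
    return -error + random.gauss(0, 0.05 / n_trials)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(h, rate=0.2):
    return [w + random.gauss(0, 0.1) if random.random() < rate else w for w in h]

pool = [random_heuristic() for _ in range(10)]
for generation in range(30):
    scored = sorted(pool, key=limited_test, reverse=True)
    parents = scored[:4]                            # keep the best performers
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pool) - len(parents))]
    pool = parents + children                       # prune the poor candidates
print("best heuristic weights:", max(pool, key=limited_test))
```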

13.
This study provides user support for a decision problem in logistic regression analysis: given a set of metric variables and one binary dependent variable, select the subset of variables that best predicts the dependent variable. The problem requires an evaluation of competing models based on heuristic selection criteria such as goodness of fit and prediction accuracy. This paper documents the heuristics, formalizes the algorithms, and presents an interactive decision support system that facilitates the selection of such an optimal model. The study adds to the sparsely studied domain of expert systems for social science researchers and makes three contributions to the literature. First, it formalizes a number of heuristics for arriving at optimal logistic regression models. Second, it presents two computational algorithms that incorporate these formalized heuristics. Third, it documents an implementation of these algorithms in an interactive decision support system. The study concludes with a discussion of the risks of relying too heavily on the system and of future research opportunities.
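One common heuristic of this kind is greedy forward selection driven by cross-validated prediction accuracy; the sketch below illustrates it. The synthetic data, the choice of scikit-learn, and the stopping rule are assumptions for illustration and do not reproduce the criteria or interactive workflow of the system described in the paper.

```python
# Sketch: greedy forward selection of predictors for a logistic regression model,
# scored by cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)

def forward_select(X, y, min_gain=0.005):
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining:
        # Try adding each remaining variable and keep the best-scoring one.
        scores = {}
        for j in remaining:
            cols = selected + [j]
            model = LogisticRegression(max_iter=1000)
            scores[j] = cross_val_score(model, X[:, cols], y, cv=5).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] - best_score < min_gain:   # heuristic stopping rule
            break
        selected.append(j_best)
        remaining.remove(j_best)
        best_score = scores[j_best]
    return selected, best_score

subset, score = forward_select(X, y)
print("selected variables:", subset, "cv accuracy: %.3f" % score)
```

Swapping the cross-validated accuracy for another criterion (e.g. a goodness-of-fit statistic) changes which model the heuristic prefers, which is exactly the trade-off a decision support system can make explicit to the analyst.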

14.
The work reported here attempts to address Human-Computer Interaction (HCI) design problems by creating support for the conceptualization of such problems during evaluation. This support takes the form of a planning aid intended to help novice human factors practitioners (recently qualified graduates, for example) evaluate interactive work systems. The planning aid provides a structure for relating and recruiting techniques used in Human Factors (HF) evaluations. It incorporates relevant information for planning an evaluation (e.g., the evaluation methods themselves), and offers advice in the form of heuristics about the use of the methods, their selection, and their configuration. The output of the planning aid is an evaluation plan. This paper reports the development of the planning aid and illustrates its application with a case study. Two assessments of the planning aid with novice HF practitioners are also presented and discussed.

15.
Establishing explicit mappings between features and their implementation elements in code is one of the critical factors in maintaining and evolving software systems successfully. This is especially important when developers have to evolve program families, which have grown from a single core system into similar but different systems that accommodate varying customer requirements. Many techniques and tools have emerged to assist developers in the feature mapping activity. However, existing techniques and tools for feature mapping are limited in that they operate on a single program version at a time and recover features only on demand, that is, developers have to run the tools for each family member version individually. In this paper, we propose a cohesive suite of five mapping heuristics addressing these two limitations. The heuristics explore the evolution history of the family members in order to expand feature mappings in evolving program families. Expansion refers to automatically generating the feature mappings for each family member version by systematically considering its previous change history. The mapping expansion starts from seed mappings and continually tracks the features of the program family, thus eliminating the need for on-demand algorithms. Additionally, we present the MapHist tool, which supports the application of the proposed heuristics. We evaluate the accuracy of our heuristics on two evolving program families from our industrial partners.
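A highly simplified sketch of the expansion idea follows: starting from seed mappings, the change history of each subsequent version is replayed to update the feature-to-code mappings automatically. The change-set representation (renames, feature-tagged additions, deletions) is a hypothetical simplification, not MapHist's actual model or heuristics.

```python
# Sketch: expand seed feature mappings across versions by replaying change history.

seed_mappings = {"Payment": {"PaymentService.java", "Invoice.java"}}

history = [  # one change set per family-member version (illustrative)
    [("rename", "Invoice.java", "Bill.java"),
     ("add", "BillPrinter.java", "Payment")],
    [("delete", "PaymentService.java", None),
     ("add", "PaymentGateway.java", "Payment")],
]

def expand(mappings, change_sets):
    versions = [{f: set(elems) for f, elems in mappings.items()}]
    for changes in change_sets:
        current = {f: set(elems) for f, elems in versions[-1].items()}
        for op, element, arg in changes:
            for feature, elements in current.items():
                if op == "rename" and element in elements:
                    elements.discard(element)
                    elements.add(arg)       # follow the renamed element
                elif op == "delete":
                    elements.discard(element)
            if op == "add" and arg in current:   # element added under a known feature
                current[arg].add(element)
        versions.append(current)
    return versions

for i, version in enumerate(expand(seed_mappings, history)):
    print("version", i, version)
```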

16.
Heuristics for model checking Java programs
Model checking of software programs has two goals – the verification of correct software and the discovery of errors in faulty software. Some techniques for dealing with the most crucial problem in model checking, the state space explosion problem, concentrate on the first of these goals. In this paper we present an array of heuristic model checking techniques for combating the state space explosion when searching for errors. Previous work on this topic has mostly focused on property-specific heuristics closely related to particular kinds of errors. We present structural heuristics that attempt to explore the structure (branching structure, thread interdependency structure, abstraction structure) of a program in a manner intended to expose errors efficiently. Experimental results show the utility of this class of heuristics. In contrast to these very general heuristics, we also present very lightweight techniques for introducing program-specific heuristic guidance.
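The core mechanism behind heuristic error search is best-first exploration of the state space with a pluggable scoring function, as in the sketch below. The toy counter system, its error condition, and the distance-style heuristic are illustrative assumptions; the paper's structural heuristics operate on real Java program structure rather than on this stand-in.

```python
import heapq
import itertools

def successors(state):
    """Toy transition system: two counters, each can be incremented."""
    a, b = state
    return [(a + 1, b), (a, b + 1)]

def is_error(state):
    return state == (3, 4)            # a hypothetical assertion violation

def structural_heuristic(state):
    """Score states that look 'closer' to an error (stand-in for branching or
    thread-interdependency heuristics)."""
    a, b = state
    return abs(3 - a) + abs(4 - b)

def heuristic_search(initial):
    counter = itertools.count()                       # tie-breaker for the heap
    frontier = [(structural_heuristic(initial), next(counter), initial)]
    visited = {initial}
    expanded = 0
    while frontier:
        _, _, state = heapq.heappop(frontier)
        expanded += 1
        if is_error(state):
            return state, expanded
        for succ in successors(state):
            if succ not in visited and max(succ) <= 10:   # crude state bound
                visited.add(succ)
                heapq.heappush(frontier,
                               (structural_heuristic(succ), next(counter), succ))
    return None, expanded

error_state, expanded = heuristic_search((0, 0))
print("error found:", error_state, "after expanding", expanded, "states")
```

Replacing the priority queue with a plain FIFO queue turns this into breadth-first search, which typically expands far more states before reaching the error; that gap is what heuristic model checking exploits.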

17.
Current software environments used to support parallel processing on a cluster of workstations (COW) are unsatisfactory: they do not provide complete transparency and are not specifically designed for parallel processing. In particular, the establishment of a parallel processing environment and the initialisation of parallel processes suffer from poor performance. Each parallel process of an application is created sequentially, and in many cases the logon operation must be completed before remote resources can be acquired. These operations are also performed manually. We present in this paper an original approach that addresses the problem of parallel process creation: remote workstations are acquired completely transparently and dynamically, and parallel processes are created concurrently. To demonstrate the feasibility of this approach we present a system based on RHODOS (a client/server and microkernel-based distributed operating system) that is specifically designed to improve the performance of process instantiation and thereby the overall execution performance of parallel programs, in particular parallel process creation.
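The performance argument rests on creating all parallel processes concurrently instead of one after another. The sketch below illustrates that difference with a simulated remote start-up delay and a thread pool; it is a stand-in for, not a reproduction of, the RHODOS mechanisms.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def create_remote_process(host):
    time.sleep(0.2)            # simulated logon + remote process-creation latency
    return f"process started on {host}"

hosts = [f"ws{i:02d}" for i in range(8)]

start = time.time()
sequential = [create_remote_process(h) for h in hosts]     # one at a time
print(f"sequential creation: {time.time() - start:.2f} s")

start = time.time()
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:    # all at once
    concurrent = list(pool.map(create_remote_process, hosts))
print(f"concurrent creation: {time.time() - start:.2f} s")
```

With eight workstations the sequential variant pays the start-up latency eight times, while the concurrent variant pays it roughly once, which is the effect the paper measures at the operating-system level.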

18.
We consider the design of bandwidth-demanding broadcasting applications using overlays in environments characterized by hosts with limited and asymmetric bandwidth and significant heterogeneity in upload bandwidth. Considering such environments is critical to extending the applicability of overlay multicast to mainstream Internet settings where insufficient bandwidth exists to support all hosts, yet they have not received adequate attention from the research community. We leverage the multitree framework and design heuristics that enable it to take host contribution into account and operate in bandwidth-scarce environments. Our extensions seek to simultaneously achieve good utilization of system resources, performance for hosts commensurate with their contributions, and consistent performance. We have implemented the system and conducted an Internet evaluation on PlanetLab using real traces from previous operational deployments of an overlay broadcasting system. Our results indicate that, for these traces, our heuristics can improve the performance of high contributors by 10-240% and facilitate equitable bandwidth distribution among hosts with similar contributions.

19.
PAN Li-jun, FU Zhuo. Journal of Computer Applications, 2012, 32(11): 3042-3070
To address the drawbacks of existing insertion heuristics for the vehicle routing problem with hard time windows, namely complex structure, many parameters, and low solution efficiency, a time-difference insertion heuristic is proposed for this problem. The algorithm introduces the concept of time difference and uses it as the evaluation index of the heuristic rule. Compared with the classical heuristics for this problem, the algorithm has fewer parameters and a simpler structure. Tests on standard benchmark instances show that the solution quality of the proposed algorithm is better than that of Solomon's insertion heuristic and Potvin's parallel insertion heuristic.
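For orientation, the sketch below shows a generic insertion heuristic for a hard-time-window routing instance. The tiny instance and the scoring rule (total slack between each customer's latest allowed start and its actual arrival after insertion) are one plausible, simplified reading of "time difference"; they are assumptions for illustration, not the authors' exact definition or benchmark setup.

```python
import math

# customer: (x, y, ready_time, due_time, service_time); index 0 is the depot
customers = {
    0: (0, 0, 0, 100, 0),
    1: (2, 3, 5, 30, 2),
    2: (5, 1, 10, 40, 2),
    3: (1, 6, 15, 50, 2),
}

def travel(i, j):
    xi, yi, *_ = customers[i]
    xj, yj, *_ = customers[j]
    return math.hypot(xi - xj, yi - yj)

def arrival_times(route):
    """Forward-propagate arrival times; return None if any hard window is violated."""
    t, times = 0.0, []
    for prev, cur in zip(route, route[1:]):
        t += travel(prev, cur)
        ready, due, service = customers[cur][2], customers[cur][3], customers[cur][4]
        if t > due:
            return None
        t = max(t, ready)              # wait if arriving early
        times.append(t)
        t += service
    return times

def best_insertion(route, customer):
    """Try every position; keep the feasible insertion with the largest total slack."""
    best = None
    for pos in range(1, len(route)):
        candidate = route[:pos] + [customer] + route[pos:]
        times = arrival_times(candidate)
        if times is None:
            continue
        slack = sum(customers[c][3] - t for c, t in zip(candidate[1:], times))
        if best is None or slack > best[0]:
            best = (slack, candidate)
    return best

route = [0, 0]                          # start and end at the depot
for c in (1, 2, 3):
    result = best_insertion(route, c)
    if result:
        route = result[1]
print("route:", route)
```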

20.
Research on heuristic evaluation in recent years has focused on improving its effectiveness and efficiency with respect to user testing. The aim of this paper is to refine a research agenda for comparing and contrasting evaluation methods. To reach this goal, a framework is presented for evaluating the effectiveness of different types of support for structured usability problem reporting. This paper reports on an empirical study of this framework that compares two sets of heuristics, Nielsen's heuristics and the cognitive principles of Gerhardt-Powals, and two media for reporting usability problems, i.e. a web tool or paper. The study found no significant differences between any of the four groups in effectiveness, efficiency, or inter-evaluator reliability. A more significant contribution of this research is that the framework used for the experiments proved successful and, because of its thorough structure, should be reusable by other researchers.
