Similar documents
20 similar documents found (search time: 375 ms)
1.
Meyer J 《Ergonomics》2000,43(11):1840-1865 (also indexed as 《Ergonomics》2012,55(11):1840-1865)
After more than 70 years of research it is still not clear under what conditions graphic presentations of information have an advantage over tables. A minimum-assumption Visual Search Model (VSM) was designed to predict the performance of various tasks with tables and graphs that show data with different levels of complexity. An experiment tested the performance of five tasks with tables, bar graphs and line graphs, showing data with various levels of complexity, over the course of nine experimental sessions in order to assess possible changes in the relative efficiency of the displays after practice. Tables had an initial advantage over graphs for all tasks, and there were complex interactions between the variables. The initial differences between the displays disappeared for some tasks after users gained experience with the displays, while for other tasks the differences persisted even after extended practice. The VSM predicted the results for tables well. For graphs the model was adequate for tasks that involve single data points, such as reading values or comparing pairs of values. The performance of tasks that require the analysis of data configurations, such as reading a trend, could not be predicted with the VSM. Hence the VSM can predict task performance with tables and graphs for low-integration tasks.

3.
《Ergonomics》2012,55(1):331-335
Psychophysical theory and its use in establishing maximum acceptable workloads are briefly reviewed. A distinction is made between psychophysical criteria for static strength and psychophysical criteria for dynamic strength. The experimental protocol and equipment developed during the series of Liberty Mutual studies are described. These studies used industrial workers in a controlled environment to develop maximum acceptable weights and forces for the basic manual handling tasks. Two recent experiments investigating task frequency and object size are also described. It is concluded that minor changes should be made in the tables of maximum acceptable weights and forces published by Snook [576].

4.
In two experiments participants had to detect changes in periodic sinusoidal functions, displayed in either graphic or tabular displays. Graphs had a major advantage over tables when the task required considering configurations of data. Both displays led to similar results when task performance could rely on inspecting individual data points. With graphs almost all participants reported using the optimal method for detecting changes in the function, i.e., they used the method requiring the least effort to perform the task. With tables only about half used the optimal detection method, and these participants showed transfer of learning of detection methods between tasks. Experience in using a detection method led to improved performance if the new task relied on the same method of detection. These findings demonstrate the need to consider task performance methods when determining the relative value of different displays. The set of tasks for which a display is used is likely to affect performance and needs to be analysed as a whole, since methods employed for one task can affect task performance in other tasks.

5.
To address problems in the teaching of digital logic laboratory courses, this paper proposes a reform of the experimental curriculum and an approach to designing laboratory assignments, replacing traditional verification experiments with design experiments using the VHDL language, and describes the design of the assignments and the teaching results. Assignments should cover the main knowledge points of the course, be of appropriate difficulty, and have no ready-made answers. The assignments include two combinational logic designs and three sequential logic designs, comprising both required experiments and optional problems. The teaching results show that the assignment design was successful and achieved its intended goals.

6.
This paper reports results from a controlled experiment (N = 50) measuring effects of interruption on task completion time, error rate, annoyance, and anxiety. The experiment used a sample of primary and peripheral tasks representative of those often performed by users. Our experiment differs from prior interruption experiments because it measures effects of interrupting a user's tasks along both performance and affective dimensions and controls for task workload by manipulating only the time at which peripheral tasks were displayed – between vs. during the execution of primary tasks. Results show that when peripheral tasks interrupt the execution of primary tasks, users require from 3% to 27% more time to complete the tasks, commit twice the number of errors across tasks, experience from 31% to 106% more annoyance, and show twice the increase in anxiety, compared with when those same peripheral tasks are presented at the boundary between primary tasks. An important implication of our work is that attention-aware systems could mitigate effects of interruption by deferring presentation of peripheral information until coarse boundaries are reached during task execution. As our results show, deferring presentation for a short time, i.e. just a few seconds, can lead to a large mitigation of disruption.

7.
A table is a well-organized and summarized knowledge expression for a domain. Therefore, it is of great importance to extract information from tables. However, many tables in Web pages are used not to transfer information but to decorate pages. One of the most critical tasks in Web table mining is thus to discriminate meaningful tables from decorative ones. The main obstacle of this task comes from the difficulty of generating relevant features for discrimination. This paper proposes a novel discrimination method using a composite kernel which combines parse tree kernels and a linear kernel. Because a Web table is represented as a parse tree by an HTML parser, it is natural to represent the structural information of a table as a parse tree. In this paper, two types of parse trees are used to represent structural information within and around a table. These two trees define the structure kernel that handles the structural information of tables. The contents of a Web table are manipulated by a linear kernel with content features. Support vector machines with the composite kernel distinguish meaningful tables from decorative ones with high accuracy. A series of experiments show that the proposed method achieves state-of-the-art performance.
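The composite-kernel idea described in this abstract can be sketched in a few lines: a structure kernel over parse trees is mixed with a linear kernel over content features. The sketch below is illustrative only; the shared-path "tree kernel", the toy features, and the mixing weight `alpha` are assumptions for the example, not the paper's actual kernels.

```python
# Minimal sketch of a composite kernel: structure similarity plus
# content similarity, combined by a weighted sum. The resulting
# kernel matrix could be fed to an SVM as a precomputed kernel.

def tree_kernel(paths_a, paths_b):
    # Similarity of two parse trees, approximated here as the number
    # of root-to-node tag paths the two trees share.
    return len(set(paths_a) & set(paths_b))

def linear_kernel(x, y):
    # Ordinary dot product over content-feature vectors.
    return sum(a * b for a, b in zip(x, y))

def composite_kernel(ta, tb, xa, xb, alpha=0.5):
    # alpha is a hypothetical mixing weight between structure and content.
    return alpha * tree_kernel(ta, tb) + (1 - alpha) * linear_kernel(xa, xb)

# Toy tables: tag paths inside each <table>, plus invented content
# features (cell count, fraction of numeric cells).
t1 = ["table/tr/th", "table/tr/td"]
t2 = ["table/tr/td", "table/tr/img"]
x1 = [4.0, 0.75]
x2 = [2.0, 0.0]

print(composite_kernel(t1, t2, x1, x2))  # 0.5*1 + 0.5*8.0 = 4.5
```

With a kernel of this shape, meaningful vs. decorative classification reduces to training an SVM on the precomputed kernel matrix.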

8.
A large number of web pages contain data structured in the form of "lists". Many such lists can be further split into multi-column tables, which can then be used in more semantically meaningful tasks. However, harvesting relational tables from such lists can be a challenging task. The lists are manually generated and hence need not have well-defined templates: they have inconsistent delimiters (if any) and often have missing information. We propose a novel technique for extracting tables from lists. The technique is domain independent and operates in a fully unsupervised manner. We first use multiple sources of information to split individual lines into multiple fields and then compare the splits across multiple lines to identify and fix incorrect splits and bad alignments. In particular, we exploit a corpus of HTML tables, also extracted from the web, to identify likely fields and good alignments. For each extracted table, we compute an extraction score that reflects our confidence in the table's quality. We conducted an extensive experimental study using both real web lists and lists derived from tables on the web. The experiments demonstrate the ability of our technique to extract tables with high accuracy. In addition, we applied our technique on a large sample of about 100,000 lists crawled from the web. The analysis of the extracted tables has led us to believe that there are likely to be tens of millions of useful and query-able relational tables extractable from lists on the web.
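The core step of the list-to-table idea can be sketched with a toy splitter: try candidate delimiters on each line, then prefer the split that yields the most consistent column count across lines. This is a simplified stand-in under stated assumptions; the paper's actual method also consults a corpus of HTML tables and repairs bad alignments, which this sketch does not.

```python
# Toy list-to-table splitter: choose the delimiter whose splits agree
# best across lines, and score that agreement as a crude "extraction
# score" (fraction of agreeing lines, weighted by column count).

from collections import Counter

def split_list(lines, delimiters=(",", ";", "|", "\t")):
    best = None
    for d in delimiters:
        rows = [[f.strip() for f in line.split(d)] for line in lines]
        widths = Counter(len(r) for r in rows)
        width, votes = widths.most_common(1)[0]
        score = (votes / len(rows)) * (width - 1)  # favor real multi-column splits
        if best is None or score > best[0]:
            best = (score, d, rows)
    return best[1], best[2]

lines = ["Alice, 34, Paris", "Bob, 28, Oslo", "Carol, 41, Lima"]
delim, table = split_list(lines)
print(delim, table[0])  # , ['Alice', '34', 'Paris']
```

A real extractor would additionally handle missing fields and validate candidate columns against known table corpora.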

9.
Practice and alternation among a set of jobs are the characteristics of any jobbing industry. Learning and transfer of learning, which are the main human factors issues in practice and alternation, were investigated through laboratory simulation of two typical industrial information-processing tasks. In the first experiment a location task was examined, while a search task was investigated in the second. In both experiments two levels of task complexity, two groups of subjects, and two positions were combined in a 2 × 2 Latin square formation with 800 trials in each task level. The results show that the learning pattern appears to be task dependent, with quicker learning in location tasks than in search tasks. Learning also transfers differently for the tasks considered. Implications for industrial training programs for assembly tasks are discussed.

10.
Objective: In this paper, we present the results of two controlled experiments conducted to assess a new method based on think-pair-square in the distributed modeling of use case diagrams. Methods: This new method has been implemented within an integrated environment, which allows distributed synchronous modeling and communication among team members. To study the effect of the participants' familiarity with the method and the integrated environment, the second experiment was a replication conducted with the same participants as the original experiment. Results: The results show a significant difference in favor of face-to-face (i.e., the chosen baseline) for the time to complete modeling tasks, with no significant impact on the quality of the produced models. The results on participants' familiarity indicate a significant effect on task completion time (i.e., more familiar participants spent less time), with no significant impact on quality. Practice: One of the most interesting practical implications of our study is that, if the time difference is not an issue but moving people might be a problem, the new method and environment could represent a viable alternative to face-to-face. Another significant result is that people not perfectly trained on our method and environment may also benefit from their use: the training phase could be shortened or skipped. In addition, face-to-face is less prone to consolidate participants' working style and to develop a shared working habit among participants. Implications: This work is in the direction of the media-effect theories applied to requirements engineering. The results indicate that the participants in the experiments spent significantly less time when modeling use case diagrams face-to-face. Conversely, no significant difference was observed in the quality of the artifacts produced by the participants in these tasks.

11.
While techniques for evaluating the performance of lower-level document analysis tasks such as optical character recognition have gained acceptance in the literature, attempts to formalize the problem for higher-level algorithms, while receiving a fair amount of attention in terms of theory, have generally been less successful in practice, perhaps owing to their complexity. In this paper, we introduce intuitive, easy-to-implement evaluation schemes for the related problems of table detection and table structure recognition. We also present the results of several small experiments, demonstrating how well the methodologies work and the useful sorts of feedback they provide. We first consider the table detection problem. Here algorithms can yield various classes of errors, including non-table regions improperly labeled as tables (insertion errors), tables missed completely (deletion errors), larger tables broken into a number of smaller ones (splitting errors), and groups of smaller tables combined to form larger ones (merging errors). This leads naturally to the use of an edit distance approach for assessing the results of table detection. Next we address the problem of evaluating table structure recognition. Our model is based on a directed acyclic attribute graph, or table DAG. We describe a new paradigm, “graph probing,” for comparing the results returned by the recognition system and the representation created during ground-truthing. Probing is in fact a general concept that could be applied to other document recognition tasks as well. Received July 18, 2000 / Accepted October 4, 2001
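The four error classes named in this abstract can be counted with a simple overlap test between ground-truth and detected table regions. The sketch below is a minimal illustration under stated assumptions: regions are one-dimensional (start, end) line spans and matching is plain overlap, not the paper's full edit-distance formulation.

```python
# Count table-detection errors by matching detected regions against
# ground truth: unmatched truth = deletion, unmatched detection =
# insertion, one truth hitting many detections = split, one detection
# covering many truths = merge.

def overlaps(a, b):
    # Half-open spans (start, end) intersect when each starts before
    # the other ends.
    return a[0] < b[1] and b[0] < a[1]

def score_detection(truth, detected):
    errors = {"insertion": 0, "deletion": 0, "split": 0, "merge": 0}
    for t in truth:
        hits = [d for d in detected if overlaps(t, d)]
        if not hits:
            errors["deletion"] += 1   # table missed completely
        elif len(hits) > 1:
            errors["split"] += 1      # one table broken into smaller ones
    for d in detected:
        hits = [t for t in truth if overlaps(d, t)]
        if not hits:
            errors["insertion"] += 1  # non-table region labeled as table
        elif len(hits) > 1:
            errors["merge"] += 1      # several tables fused into one
    return errors

truth = [(0, 10), (12, 20), (30, 40)]
detected = [(0, 5), (6, 10), (12, 40)]
print(score_detection(truth, detected))
# {'insertion': 0, 'deletion': 0, 'split': 1, 'merge': 1}
```

An edit-distance score, as used in the paper, would additionally assign costs to each error class rather than just counting them.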

12.
A 9-degree-of-freedom modular hyper-redundant space manipulator was designed to meet the requirements of on-orbit servicing tasks. The manipulator consists of nine identical joints, and the number of joints can be adjusted according to the task. The modular joints use an integrated design, with the mechanical transmission and the electrical components compactly arranged inside each joint. Based on an improved Bi-RRT algorithm and the forward and inverse kinematic models established for the manipulator, simulations and experiments of traversing complex obstacle environments were carried out; the results show that the manipulator can flexibly pass through such environments. Based on an impedance control algorithm, writing experiments and constant-force-holding experiments were conducted with the manipulator, and the results show that it has good force control capability. These experiments verify that the manipulator is capable of performing on-orbit servicing in complex space environments.

13.
Traditional compressor test methods can no longer meet the requirements of developing high-performance aero-engines. This paper therefore presents a real-time control, acquisition, and processing system for full-scale compressor experiments, and gives the system's hardware architecture, implementation techniques, and real-time software design. Under microcomputer control, the scanning valve, the NEFF-620 scanning device, and the three-axis traverse mechanism operate in coordination, so that automatic probe positioning proceeds in step with data acquisition and processing, and experimental results are output immediately as figures and tables. Since the system entered service, test time and cost have both been reduced by more than 50%, with a measurement accuracy of 0.3%.

14.
Domain-specific languages (DSLs) allow developers to write code at a higher level of abstraction compared with general-purpose languages (GPLs). Developers often use DSLs to reduce the complexity of GPLs. Our previous study found that developers performed program comprehension tasks more accurately and efficiently with DSLs than with corresponding APIs in GPLs. This study replicates our previous study to validate and extend the results when developers use IDEs to perform program comprehension tasks. We performed a dependent replication of a family of experiments. We made two specific changes to the original study: (1) participants used IDEs to perform the program comprehension tasks, to address a threat to validity in the original experiment and (2) each participant performed program comprehension tasks on either DSLs or GPLs, not both as in the original experiment. The results of the replication are consistent with and expanded the results of the original study. Developers are significantly more effective and efficient in tool-based program comprehension when using a DSL than when using a corresponding API in a GPL. The results indicate that, where a DSL is available, developers will perform program comprehension better using the DSL than when using the corresponding API in a GPL.

15.
Problem-solving performance with tabular and graphical computer displays was examined as problem type, number progression, and memory capacity were systematically manipulated. Participants used tables and line graphs that depicted linear or multilinear number progressions to solve location, interpolation, trend analysis, and forecasting problems. Experiment 1, in which the displayed information was continuously available, indicated that participants' performance in identifying specific values was better with tables than with graphs. For trend analysis and interpolation problems, graphs with multilinear data facilitated performance, while the forecasting tasks showed no systematic effect of the factors. In Experiment 2, in which the displayed information was not continuously available, participants performed best with the graphical displays under most conditions. These results are discussed in terms of designing computer information displays.

16.
《Ergonomics》2012,55(11):1019-1032
Two experiments were conducted investigating the movement patterns produced in the completion of aiming responses. Movement displacement, velocity, and acceleration patterns were examined in the first experiment in an attempt to determine the control processes used in discrete, peg transfer, and reciprocal tapping tasks. The kinematic parameters indicated that each of these tasks was characterized by discrete error corrections occurring near the target. Experiment 2 demonstrated that under high index-of-difficulty conditions responses are characterized by multiple discrete corrections designed to eliminate the discrepancy between the position of the hand and the target. These findings are discussed in relation to a discrete feedback interpretation of Fitts' law.

17.
The Semantic Web is distributed yet interoperable: distributed, since resources are created and published by a variety of producers, tailored to their specific needs and knowledge; interoperable, as entities are linked across resources, allowing resources from different providers to be used in concert. Complementary to the explicit usage of Semantic Web resources, embedding methods have made them applicable to machine learning tasks. Subsequently, embedding models for numerous tasks and structures have been developed, and embedding spaces for various resources have been published. The ecosystem of embedding spaces is distributed but not interoperable: entity embeddings are not readily comparable across different spaces. To parallel the Web of Data with a Web of Embeddings, we must thus integrate available embedding spaces into a uniform space. Current integration approaches are limited to two spaces and presume that both of them were embedded with the same method; both assumptions are unlikely to hold in the context of a Web of Embeddings. In this paper, we present FedCoder, an approach that integrates multiple embedding spaces via a latent space. We assert that linked entities have a similar representation in the latent space, so that entities become comparable across embedding spaces. FedCoder employs an autoencoder to learn this latent space from linked as well as non-linked entities. Our experiments show that FedCoder substantially outperforms state-of-the-art approaches when faced with different embedding models, that it scales better than previous methods in the number of embedding spaces, and that it improves as more graphs are integrated, while performing comparably with current approaches that assume joint learning of the embeddings and are usually limited to two sources. Our results demonstrate that FedCoder is well adapted to integrate the distributed, diverse, and large ecosystem of embedding spaces into an interoperable Web of Embeddings.
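The intuition behind aligning embedding spaces via linked entities can be sketched without an autoencoder: linked entities anchor a mapping between two spaces, after which held-out entities become comparable. The sketch below is a deliberately minimal linear stand-in for FedCoder's learned latent space (a least-squares map fitted on links); the dimensions, entity counts, and the hidden transform are invented for the example.

```python
# Minimal linear stand-in for cross-space embedding alignment:
# fit a map from space B into space A using only linked entities,
# then check that an unlinked entity lands near its counterpart.

import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 3))

# Space A: 3-d embeddings of five entities. Space B embeds the same
# entities differently (here: an unknown linear transform of A).
A = rng.normal(size=(5, 3))
B = A @ W_hidden

# The first four entities are linked across the spaces; fit the map
# on those links only, by least squares.
M, *_ = np.linalg.lstsq(B[:4], A[:4], rcond=None)

# The held-out fifth entity is now comparable across spaces.
print(np.allclose(B[4] @ M, A[4], atol=1e-6))  # True
```

FedCoder generalizes this idea to many spaces and nonlinear encoders, learning one shared latent space rather than pairwise maps.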

18.
Sixty-five teams of basic and secondary school students solved problem-solving tasks during a virtual hike in a Web-based inquiry-learning simulation, 'Hiking Across Estonia'. This environment provided learners with all the background information necessary for problem-solving and with tools for carrying out experiments. There were 25 tasks, in a fixed order, about ecological and environmental issues. The teams were clustered according to the data about the participants, the results of the pre- and post-test, and their achievement in the problem-solving tasks of the virtual hike. Only two of the five clusters were regarded as effective in solving problems and analysing tables, graphs, figures, and photos; the others had difficulties in forming contextual or task and process awareness. A support system for increasing the effectiveness of inquiry learning and enhancing the development of analytical skills was developed on the basis of the strategies that the members of the five clusters had used in solving the problems, their achievement in solving the tasks during the virtual hike and in the pre- and post-test, and the personal data about the teams. The support system presented different notes before the problems and, for some clusters, changed the sequence of the tasks on the virtual hike. The system was evaluated in a second study with 60 teams. The comparison of the two studies demonstrated that the support system significantly improved both general problem-solving ability and analytical skills. The characteristics of each cluster and the influence of the support system are discussed in this paper.

19.
An important objective of data mining is the development of predictive models. Based on a number of observations, a model is constructed that allows the analyst to provide classifications or predictions for new observations. Currently, most research focuses on improving the accuracy or precision of these models, and comparatively little research has been undertaken to increase their comprehensibility to the analyst or end-user. This is mainly due to the subjective nature of 'comprehensibility', which depends on many factors outside the model, such as the user's experience and prior knowledge. Despite this influence of the observer, some representation formats are generally considered to be more easily interpretable than others. In this paper, an empirical study is presented which investigates the suitability of a number of alternative representation formats for classification when interpretability is a key requirement. The formats under consideration are decision tables, (binary) decision trees, propositional rules, and oblique rules. An end-user experiment was designed to test the accuracy, response time, and answer confidence for a set of problem-solving tasks involving these representations. Analysis of the results reveals that decision tables perform significantly better on all three criteria, while post-test voting also reveals a clear preference of users for decision tables in terms of ease of use.
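The readability advantage of decision tables comes from their shape: classification is a direct lookup on a combination of condition values, with no tree path to trace or rule chain to evaluate. A minimal sketch, with entirely invented conditions and outcomes:

```python
# A decision table as a plain lookup: every combination of condition
# values maps directly to an outcome. The loan-decision rules below
# are hypothetical, for illustration only.

decision_table = {
    # (income_high, has_collateral) -> decision
    (True,  True):  "approve",
    (True,  False): "approve",
    (False, True):  "review",
    (False, False): "reject",
}

def classify(income_high, has_collateral):
    return decision_table[(income_high, has_collateral)]

print(classify(False, True))  # review
```

Exhaustively enumerating the condition combinations is what makes the format easy to audit: a reader can verify completeness and consistency row by row.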

20.
For working tasks with high visual demand, ergonomic design of workstations requires defining criteria for the comparative evaluation and analysis of visual perceptibility in different regions of the workspace. This paper provides kinematic models of visual acuity and motion resolvability as adopted measures of the visual perceptibility of the workspace. The proposed models were examined through two sets of experiments. The first experiment was designed to compare the models' outputs with those from experiments. Time measurements of the participants' responses to visual events were used to calculate the perceptibility measures. The overall comparison shows similar patterns and moderate statistical errors between the measured and the kinematically modeled values of the parameters. In the second experiment, the proposed set of visual perceptibility measures was examined for a simulated industrial task of inserting electronic chips into slots of a working table, resembling a fine assembly line for transponder manufacturing. The results of ANOVA tests for visual acuity and motion resolvability justify the postures adopted by the participants, as assessed with the visual perceptibility measures, for completing the insertion tasks.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号