Similar Documents
20 similar documents were retrieved.
1.
There is often a need to display logged information textually for real-time, event-based supervisory tasks. Textual display design can follow several directions that reflect a tradeoff between visual load and operational load. The study reported here was designed to examine this tradeoff and its implications for such display design. An event-based monitoring and handling task was used with different event types having either a high or a low handling priority. The events were presented in four display configurations varying in their degree of visual and operational load. The specific performance indices were event dwelling times, event handling proportion, and handling errors. In general, the high-priority events were handled faster and more accurately than the low-priority events. In addition, performance with the various display configurations depended upon event type. These findings are discussed in terms of the visual vs. operational load tradeoff and its context-sensitivity. Some implications for display design and further research on event presentation approaches are discussed.

2.
The author presents the results of an experimental investigation into the comparative usefulness of textual tools and graphical tools for the program understanding phase of Cobol program maintenance. Both novice and experienced programmers are used as subjects. The results show a slight superiority for graphical tools when they are used by less experienced programmers. They cast doubt on the importance of rigid adherence to program design methodologies for experienced programmers and on the extensibility of experiments using relatively inexperienced student subjects.

3.
Optimizing the energy consumption of robot movements is becoming an increasingly important issue in industry. Minimizing a robot's movement has been identified as one of the strategies to improve energy efficiency in robotic systems. In this paper, a mathematical model of the total energy consumption of cyclic pick-and-place tasks is proposed, which considers both the operating motion and the homing motion of a given trajectory under different joint configurations. Optimal joint configurations for cyclic pick-and-place tasks are investigated in order to maximize energy savings. Finally, a case study illustrates the proposed method; the results show that, compared with fixed joint configurations, the proposed method based on flexible joint configurations reduces energy consumption by 10.33%.
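By way of illustration, the sketch below shows the kind of configuration-selection step such a model enables: among candidate joint configurations for the same pick-and-place cycle, choose the one whose operating motion plus homing motion costs the least energy. The `motion_energy` surrogate, the candidate inverse-kinematics branches, and the joint weights are hypothetical placeholders, not the paper's actual model.

```python
# Minimal sketch, assuming a crude displacement-based energy surrogate.
import numpy as np

def motion_energy(q_start, q_end, joint_weights):
    # Hypothetical surrogate: energy grows with weighted joint displacement.
    return float(np.sum(joint_weights * np.abs(np.asarray(q_end) - np.asarray(q_start))))

def best_cycle_config(candidate_configs, q_home, joint_weights):
    """Return the (q_pick, q_place) pair minimising operating + homing energy."""
    def cycle_cost(cfg):
        q_pick, q_place = cfg
        operating = motion_energy(q_pick, q_place, joint_weights)  # pick -> place
        homing = motion_energy(q_place, q_home, joint_weights)     # place -> home
        return operating + homing
    return min(candidate_configs, key=cycle_cost)

# Two IK branches ("elbow up" / "elbow down") for the same task points.
weights = np.array([1.0, 0.8, 0.6])
configs = [((0.1, 1.2, -0.4), (0.9, 0.7, 0.2)),
           ((0.1, -1.2, 0.4), (0.9, -0.7, -0.2))]
print(best_cycle_config(configs, q_home=(0.0, 0.0, 0.0), joint_weights=weights))
```

A real implementation would replace the displacement surrogate with a trajectory-level dynamic model, but the selection logic over flexible joint configurations is the same.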

4.
This paper presents the investigation of a head-mounted display as an innovative interface on the shop floor. The display is voice-controlled and designed to support operators of next-generation manufacturing systems in critical supervisory tasks. The connection of the display to a modern numerical control is explained from a technical point of view. A usability study involving 13 skilled operators is presented, which investigates the users' perspective. This study focuses on the highly critical task of ‘approach’. The results indicate that the head-mounted display enables the operator to fulfil the control task without the conventional eye and head movements that induce visual and motor strain. The majority of the participants are in favour of an innovative display technology as a component of future human-machine interfaces.

5.

There exist a variety of distance measures that operate on time series and from which kernels can be constructed. The objective of this article is to compare those distance measures in a support vector machine setting. A support vector machine is a state-of-the-art classifier for static (non-time-series) datasets and usually outperforms k-Nearest Neighbour; however, it is often noted that 1-NN with DTW is a robust baseline for time-series classification. Through a collection of experiments we determine that the most effective distance measure is Dynamic Time Warping and the most effective classifier is kNN. However, a surprising result is that the pairing of kNN and DTW is not the most effective model. Instead we have discovered via experimentation that Dynamic Time Warping paired with the Gaussian Support Vector Machine is the most accurate time series classifier. Finally, and with good reason, we recommend a slightly less accurate model, Time Warp Edit Distance paired with the Gaussian Support Vector Machine, as it has a better theoretical basis. We also discuss the reduction in computational cost achieved by using a Support Vector Machine, finding that the Negative Kernel paired with the Dynamic Time Warping distance produces the greatest reduction in computational cost.
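As a concrete illustration of one such pairing, the sketch below builds a Gaussian kernel from pairwise DTW distances and feeds it to an SVM as a precomputed kernel. The toy data, the bandwidth, and the plain dynamic-programming DTW are illustrative choices, not the article's exact setup; note also that a Gaussian of DTW distances is not guaranteed to be positive semi-definite.

```python
# Minimal sketch: DTW distance + Gaussian kernel + SVM (precomputed kernel).
import numpy as np
from sklearn.svm import SVC

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gaussian_dtw_kernel(X, Y, sigma=1.0):
    # Kernel matrix exp(-DTW(x, y)^2 / (2 * sigma^2)); not necessarily PSD.
    d = np.array([[dtw(x, y) for y in Y] for x in X])
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Toy data: two classes of short series.
X_train = [np.sin(np.linspace(0, 3, 30)), np.sin(np.linspace(0, 3, 30)) + 0.1,
           np.linspace(0, 1, 30), np.linspace(0, 1, 30) + 0.1]
y_train = [0, 0, 1, 1]
clf = SVC(kernel="precomputed").fit(gaussian_dtw_kernel(X_train, X_train), y_train)
X_test = [np.sin(np.linspace(0, 3, 30)) + 0.05]
print(clf.predict(gaussian_dtw_kernel(X_test, X_train)))
```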


6.
This paper presents an extensive and detailed experimental evaluation of XQuery processors. The study consists of running five publicly available XQuery benchmarks, namely the Michigan benchmark (MBench), XBench, XMach-1, XMark and X007, on six XQuery processors: three stand-alone (file-based) XQuery processors (Galax, Qizx/Open, Saxon-B) and three XML/XQuery database systems (BerkeleyDB/XML, MonetDB/XQuery, X-Hive/DB). In addition to assessing and comparing the functionality, performance and scalability of the various systems, the major focus of this work is to report in detail on the experiences gathered while performing such an exhaustive study, to discuss the problems that we encountered and how we solved them, and hence to hopefully provide some guidelines (or even a recipe) for performing reproducible large-scale experimental research and system evaluation.

7.
Source code documentation often contains summaries of source code written by the authors. Recently, automatic source code summarization tools have emerged that generate summaries without requiring author intervention. These summaries are designed to help readers understand the high-level concepts of the source code. Unfortunately, there is no agreed-upon understanding of what makes up a “good summary.” This paper presents an empirical study examining summaries of source code written by authors, readers, and automatic source code summarization tools. The study examines the textual similarity between source code and summaries of source code using Short Text Semantic Similarity metrics. We found that readers use source code in their summaries more than authors do. Additionally, this study finds that the accuracy of a human-written summary can be estimated by the textual similarity of that summary to the source code.
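To make the similarity idea concrete, here is a deliberately simple lexical stand-in for the Short Text Semantic Similarity metrics used in the study: cosine similarity between the term vectors of a summary and of the code's identifiers. Real STSS metrics draw on WordNet and corpus statistics; the tokenizer and the example snippet below are illustrative only.

```python
# Minimal sketch: lexical cosine similarity between a summary and code terms.
import math
import re
from collections import Counter

def terms(text):
    # Split camelCase, keep snake_case words, and lowercase everything.
    spaced = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
    return Counter(t.lower() for t in re.findall(r"[A-Za-z]+", spaced))

def cosine_similarity(a, b):
    ta, tb = terms(a), terms(b)
    dot = sum(ta[t] * tb[t] for t in ta)
    norm = math.sqrt(sum(v * v for v in ta.values())) * math.sqrt(sum(v * v for v in tb.values()))
    return dot / norm if norm else 0.0

code = "def readConfigFile(path): return parse_config(open(path).read())"
summary = "Reads and parses a configuration file from the given path."
print(round(cosine_similarity(code, summary), 3))
```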

8.
Mutation testing is a fault-based technique for unit-level software testing. Weak mutation was proposed as a way to reduce the expense of mutation testing. Unfortunately, weak mutation is also expected to provide a weaker test of the software than mutation testing does. This paper presents results from an implementation of weak mutation, which we used to evaluate the effectiveness versus the efficiency of weak mutation. Additionally, we examined several options in an attempt to find the most appropriate way to implement weak mutation. Our results indicate that weak mutation can be applied in a manner that is almost as effective as mutation testing, and with significant computational savings.
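To illustrate the distinction (this is not the paper's implementation), the sketch below uses a toy unit and a hypothetical arithmetic-operator-replacement mutant. Strong mutation requires a test input that changes the final output; weak mutation only requires the internal state immediately after the mutated expression to differ, which is cheaper to check but a weaker requirement.

```python
# Minimal sketch: strong vs. weak mutation on a toy function.

def original(x):
    y = x * 2            # expression targeted by the mutant
    return max(y, 10)

def mutant(x):
    y = x + 2            # hypothetical operator-replacement mutant
    return max(y, 10)

def strongly_killed(x):
    return original(x) != mutant(x)        # final outputs must differ

def weakly_killed(x):
    return (x * 2) != (x + 2)              # intermediate value of `y` must differ

for x in (3, 7):
    print(f"x={x}: weak={weakly_killed(x)}, strong={strongly_killed(x)}")
# x=3 kills the mutant only weakly (both outputs are 10); x=7 kills it both ways,
# showing why weak mutation is cheaper but potentially a weaker test.
```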

9.
Displays, 1993, 14(1): 11-20
Visual configurations for a head-mounted display (HMD) are evaluated with binocular asymmetry as the defining characteristic. With such a configuration the optimum compromise between binocular rivalry, weight and usefulness of see-through at night is sought. A dichoptic area of interest and the addition of a window-frame are shown to be simple and useful aspects of a binocular asymmetric HMD configuration. The dichoptic area of interest combines a large, low-resolution image in one eye with a small, high-resolution image in the other. A window-frame acts as a strong fusion lock and improves the visual quality of the binocular image for some configurations.

10.
Computers & Geosciences, 2006, 32(8): 1040-1051
Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced; this often makes it impractical to evaluate them with traditional tests. Another reason, theoretically associated with statistical methods, is that statistical criteria depend crucially on assumptions such as normality, independence, and homogeneity. This may create problems because these assumptions are themselves open to question. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple ordinary linear regression model and three spatial models. Their performance in predicting house and land prices is examined. With a cross-validation technique, the price at each sample point is predicted by a model estimated from all samples except the one in question. Empirical criteria are then established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method can be seen as an alternative way to test the significance of the spatial relationships considered in spatial regression models.
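The cross-validation scheme can be sketched as follows. An ordinary least-squares model and synthetic house/land attributes stand in for the paper's hedonic and spatial specifications: each price is predicted by a model fitted to all other samples, and the predictions are then compared with the observed prices.

```python
# Minimal sketch: leave-one-out prediction of prices, compared with observations.
import numpy as np

def loo_predictions(X, y):
    preds = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        # Least-squares fit on all samples except i (intercept via column of ones).
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.concatenate(([1.0], X[i])) @ coef
    return preds

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))                     # e.g. floor area, lot size, distance to centre
y = 50 + X @ np.array([30.0, 12.0, -8.0]) + rng.normal(scale=5.0, size=60)
pred = loo_predictions(X, y)
print("mean absolute prediction error:", round(float(np.mean(np.abs(pred - y))), 2))
```

The same loop applies unchanged to the spatial models; only the fitting step inside it would differ.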

11.
We study the practical behavior of different algorithms and methods that aim to estimate the intrinsic dimension (IDim) of metric spaces. Some of them were developed specifically to evaluate the complexity of searching in metric spaces, based on different theories about the distribution of distances between objects in such spaces. Others were originally designed for vector spaces only and have been extended to general metric spaces. To evaluate empirically how well the various IDim estimates reflect the actual difficulty of searching in metric spaces, we compare two representatives of each of the broadest families of metric indices: those based on pivots and those based on compact partitions. Our conclusion is that the estimators Distance Exponent and Correlation best fit their purpose.
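For orientation, the sketch below implements one common correlation-based formulation: the correlation dimension, estimated from the log-log slope of the correlation integral over pairwise distances. The radius range and the synthetic data are illustrative choices, and the estimators evaluated in the paper may differ in their exact formulation.

```python
# Minimal sketch: correlation-dimension estimate from pairwise distances.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(points, n_radii=8):
    pair_d = pdist(points)                                        # all pairwise distances
    radii = np.quantile(pair_d, np.linspace(0.02, 0.25, n_radii)) # small-radius regime
    C = np.array([(pair_d <= r).mean() for r in radii])           # correlation integral C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)            # log C(r) ~ IDim * log r
    return slope

rng = np.random.default_rng(1)
# 3-D latent points embedded in 10-D: the estimate should be near 3 rather than 10.
latent = rng.normal(size=(400, 3))
X = latent @ rng.normal(size=(3, 10))
print(round(correlation_dimension(X), 2))
```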

12.
A comparative study of generalized cooccurrence texture analysis tools is presented. A generalized cooccurrence matrix (GCM) reflects the shape, size, and spatial arrangement of texture features. The particular texture features considered in this paper are (1) pixel intensity, for which generalized cooccurrence reduces to traditional cooccurrence; (2) edge pixels; and (3) extended edges. Three experiments are discussed: the first is based on a nearest-neighbor classifier, the second on a linear discriminant classifier, and the third on the Bhattacharyya distance figure of merit.
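The special case mentioned above, traditional grey-level cooccurrence on pixel intensities, can be sketched as a simple pair count at a fixed offset; the generalized versions replace intensities with edge-pixel or extended-edge features, but the counting scheme is analogous. The tiny image below is illustrative only.

```python
# Minimal sketch: grey-level cooccurrence matrix at a fixed (dy, dx) offset.
import numpy as np

def cooccurrence_matrix(image, levels, offset=(0, 1)):
    dy, dx = offset
    gcm = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            gcm[image[y, x], image[y + dy, x + dx]] += 1   # count intensity pair
    return gcm

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(cooccurrence_matrix(img, levels=4))   # counts of (I(y, x), I(y, x+1)) pairs
```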

13.
Flows of international capital to developing countries have fluctuated substantially over the last three decades. Empirical evidence concerning the main causes of international capital flows is, in general, mixed. There is strong support for the ‘push’ view that external factors have been important in driving capital inflows to emerging markets. However, the apparent importance of ‘push’ factors does not preclude the relevance of ‘pull’ phenomena. ‘Pull’ factors may be necessary to explain the geographic distribution of capital flows over time. During 1970–1990, international capital flows were mainly in the form of bank lending directed to governments and/or to the private sector. In the 1990s, capital flows took the form of foreign direct investment (FDI) and portfolio investment (PI), including bond and equity flows. The purpose of the paper is to examine the nature of foreign direct investment and portfolio investment, both of which help to finance investment and stimulate economic growth in the developing world. A quantitative classification of empirical international capital flows models forms the database for the paper. After classifying and describing the data, various theoretical and empirical model specifications used in the literature are reviewed analytically and empirically. A comparison of trends and volatilities in international capital flows for nine representative developing countries is given for 1977–2001.

14.
Information & Management, 1987, 12(3): 143-152
Strategic data bases (SDB) were prescribed by King and Cleland as the most practically useful way in which salient information could be incorporated into strategic planning processes. Critical success factors are one of several kinds of such strategic data bases, as are carefully developed, concise strength and weakness assessments. While the notion has great face validity, it has not been subjected to stringent empirical assessment. One such assessment is reported here, with the result that the SDB approach gives evidence of being useful and effective.

15.
The “Big 6” public accounting firms have invested considerable resources in the development of expert systems (ESs) for a variety of auditing tasks. However, the tasks for most existing auditing ESs appear to have been selected based on the accessibility and cooperation of experts and/or the judgmental evaluation of the developer, rather than a careful selection of the task from among a number of viable alternatives. A critical aspect of task suitability is the degree to which the characteristics of a candidate task match the capabilities of ES technology. In this study, a questionnaire was developed to obtain task-related information from practicing auditors in order to distinguish among candidate auditing tasks in terms of their suitability for ES development. Auditors in the “Big 6” public accounting firms were asked to rate the knowledge, data, and task characteristics of nine judgmental auditing tasks. The analysis of the data obtained from fifty-nine auditors revealed that the nine tasks were distinguishable in terms of their suitability for ES application. Two tasks, determining compliance with generally accepted accounting principles and audit work program development, were relatively better suited for ES application, while determination of the adequacy of an allowance and going-concern evaluation were the tasks least suited for ES application. The fact that actual ES development efforts in the “Big 6” firms have emphasized the compliance and audit work program development tasks provides a degree of validation of our results.

16.
Knowledge visualization for evaluation tasks
Although various methods for the evaluation of intelligent systems have been proposed in the past, almost no techniques exist that support the manual inspection of knowledge bases by the domain specialist. Manual knowledge base inspection is an important and frequently applied method in knowledge engineering. Since it can hardly be performed in an automated manner, it is a time-consuming and costly task. In this paper, we discuss a collection of appropriate visualization techniques that help developers to interactively browse and analyze the knowledge base in order to find deficiencies and semantic errors in their implementation. We describe standard visualization methods adapted specifically to support the analysis of the static knowledge base structure, but also of the usage of knowledge base objects such as questions or solutions. Additionally, we introduce a novel visualization technique that supports the validation of the derivation and interview behavior of a knowledge system in a semi-automatic manner. The application of the presented methods was motivated by the daily practice of knowledge base development.

17.
Classification algorithms are used in many domains to extract information from data, predict the entry probability of events of interest, and, eventually, support decision making. This paper explores the potential of extreme learning machines (ELMs), a recently proposed type of artificial neural network, for consumer credit risk management. ELMs possess some interesting properties which might enable them to improve the quality of model-based decision support. To test this, we empirically compare ELMs to established scoring techniques according to three performance criteria: ease of use, resource consumption, and predictive accuracy. The mathematical roots of ELMs suggest that they are especially suitable as a base model within ensemble classifiers. Therefore, to obtain a holistic picture of their potential, we assess ELMs in isolation and in conjunction with different ensemble frameworks. The empirical results confirm the conceptual advantages of ELMs and indicate that they are a valuable alternative to other credit risk modelling methods.
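The core ELM idea can be sketched in a few lines: hidden-layer weights are drawn at random and never trained, and only the output weights are fitted in closed form by least squares. The synthetic "credit" features, the tanh activation, and the 0.5 decision threshold below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of an extreme learning machine for binary classification.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden activations
        self.beta = np.linalg.pinv(H) @ y                           # closed-form output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta > 0.5).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))                    # e.g. income, debt ratio, age, ...
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=400) > 0).astype(int)
model = ELM().fit(X[:300], y[:300])
print("holdout accuracy:", (model.predict(X[300:]) == y[300:]).mean())
```

Because fitting reduces to one pseudoinverse, many ELMs can be trained cheaply, which is what makes them attractive as ensemble base models.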

18.
This paper presents an empirical evaluation of methods for reducing the dimensionality of dissimilarity spaces in order to optimize dissimilarity-based classifications (DBCs). One problem of DBCs is the high dimensionality of the dissimilarity spaces. To address this problem, two kinds of solutions have been proposed in the literature: prototype selection (PS) based methods and dimension reduction (DR) based methods. Although PS-based and DR-based methods have been explored separately by many researchers, little analysis has been done comparing the two. Therefore, this paper aims to find a suitable method for optimizing DBCs through a comparative study. Our empirical evaluation, obtained with the two approaches on an artificial data set and three real-life benchmark databases, demonstrates that DR-based methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA) based methods, generally improve the classification accuracies more than PS-based methods. In particular, the experimental results demonstrate that PCA is more useful for well-represented data sets, while LDA is more helpful for small sample size problems.
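The DR-based route can be sketched as follows: represent each object by its distances to the training objects, reduce that dissimilarity representation with PCA, and train a simple classifier on the reduced space. The Gaussian data, the Euclidean dissimilarity measure, the number of components, and the logistic-regression classifier are illustrative stand-ins for the paper's settings.

```python
# Minimal sketch: dissimilarity representation + PCA-based reduction + classifier.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)), rng.normal(1.5, 1.0, (100, 20))])
y = np.repeat([0, 1], 100)
idx = rng.permutation(200)
train, test = idx[:150], idx[150:]

D_train = cdist(X[train], X[train])         # dissimilarities to the training objects
D_test = cdist(X[test], X[train])
pca = PCA(n_components=10).fit(D_train)     # DR step applied to the dissimilarity space
clf = LogisticRegression(max_iter=1000).fit(pca.transform(D_train), y[train])
print("test accuracy:", clf.score(pca.transform(D_test), y[test]))
```

A PS-based alternative would instead keep the columns corresponding to a small set of selected prototypes rather than projecting the full dissimilarity matrix.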

19.
Several studies have demonstrated the superior performance of ensemble classification algorithms, whereby multiple member classifiers are combined into one aggregated and powerful classification model, over single models. In this paper, two rotation-based ensemble classifiers are proposed as modeling techniques for customer churn prediction. In Rotation Forests, feature extraction is applied to feature subsets in order to rotate the input data for training base classifiers, while RotBoost combines Rotation Forest with AdaBoost. In an experimental validation based on data sets from four real-life customer churn prediction projects, Rotation Forest and RotBoost are compared to a set of well-known benchmark classifiers. Moreover, variations of Rotation Forest and RotBoost are compared, implementing three alternative feature extraction algorithms: principal component analysis (PCA), independent component analysis (ICA) and sparse random projections (SRP). The performance of the rotation-based ensemble classifiers is found to depend upon: (i) the performance criterion used to measure classification performance, and (ii) the implemented feature extraction algorithm. In terms of accuracy, RotBoost outperforms Rotation Forest, but none of the considered variations offers a clear advantage over the benchmark algorithms. However, in terms of AUC and top-decile lift, results clearly demonstrate the competitive performance of Rotation Forests compared to the benchmark algorithms. Moreover, ICA-based Rotation Forests outperform all other considered classifiers and are therefore recommended as a well-suited alternative classification technique for the prediction of customer churn that allows for improved marketing decision making.
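A simplified sketch of a single Rotation Forest member (with PCA as the feature extractor) is given below: split the features into subsets, fit PCA per subset on a bootstrap sample, assemble a block-diagonal rotation matrix, and train a tree on the rotated data; an ensemble repeats this and averages the members' votes. The subset count, bootstrap fraction, and synthetic data are illustrative, and the full algorithm (and the ICA/SRP variants) includes details omitted here.

```python
# Simplified sketch of one Rotation Forest member and a small ensemble.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fit_rotation_member(X, y, n_subsets=3, rng=None):
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    R = np.zeros((n_features, n_features))
    for cols in np.array_split(rng.permutation(n_features), n_subsets):
        sample = rng.choice(len(X), size=int(0.75 * len(X)), replace=True)
        pca = PCA().fit(X[sample][:, cols])
        R[np.ix_(cols, cols)] = pca.components_.T          # block-diagonal rotation
    return R, DecisionTreeClassifier(random_state=0).fit(X @ R, y)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (150, 9)), rng.normal(1.0, 1.0, (150, 9))])
y = np.repeat([0, 1], 150)
members = [fit_rotation_member(X, y, rng=i) for i in range(10)]
votes = np.mean([tree.predict(X @ R) for R, tree in members], axis=0)
print("training accuracy:", ((votes > 0.5).astype(int) == y).mean())
```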

20.
Ergonomics, 2012, 55(11): 1413-1423
Dynamic task environments in supervisory control situations differ from those traditionally investigated in problem-solving research in that (1) several task goals exist in parallel, (2) task goals change dynamically as the behaviour of the technical process changes, and (3) the information required to accomplish task goals changes over time. In the present work, it is suggested that such dynamic task environments can be described using two types of task goal networks, namely a control task goal (CTG) network and an information processing goal (IPG) network. CTG networks are generated by analysis of the operational states required to produce the commodity for which a technical system has been designed. For example, such analyses can be performed using approaches such as Mitchell's operator function model or canonical means-end analyses. IPG networks are generated by using the recently proposed functional information and knowledge acquisition (FIKA) modelling technique. Two examples from different domains illustrate how these task goal networks can be used to describe dynamic task environments. Finally, two different ways of using the task modelling approach are briefly discussed.
