Similar Documents
20 similar documents found (search time: 0 ms)
1.
Previous computerized productivity measurement models, built to assist firms in computing productivity measures from a set of input data, have been constructed using procedural languages, primarily Fortran and Basic. These models have a number of shortcomings which have detracted from their usefulness. First, they must be modified to adapt them to the organization and data available from a particular firm. Modification is expensive and time-consuming, since it requires detailed knowledge of the model's structure and of the language in which it is programmed, in addition to detailed knowledge of the firm. Second, data entry is not only difficult and time-consuming, but the user also has no indication of what happens between the input of data and the output of the final productivity measure.

This paper describes the development of an interactive multifactor productivity measurement model using Lotus 123. With the spreadsheet software, the model can easily be adapted to fit the needs of the firm by the firm's industrial engineer with only a working knowledge of Lotus 123. Changes in data can be made easily using the features of the spreadsheet software, and their effects on various aspects of the model can be seen easily and rapidly.
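As a rough illustration of the kind of computation such a model performs, here is a minimal sketch of a multifactor productivity measure in Python. The input categories and figures are invented for the example, not taken from the paper.

```python
# Hypothetical multifactor productivity measure: total output value divided
# by the combined cost of all inputs. Categories and numbers are
# illustrative only.

def multifactor_productivity(output_value, inputs):
    """inputs: dict mapping an input category (labor, materials, ...)
    to its cost for the period."""
    total_input = sum(inputs.values())
    if total_input <= 0:
        raise ValueError("total input cost must be positive")
    return output_value / total_input

ratio = multifactor_productivity(
    120_000,
    {"labor": 40_000, "materials": 30_000, "capital": 20_000, "energy": 10_000},
)
# 120,000 of output over 100,000 of combined input gives a ratio of 1.2
```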


2.
Errors are prevalent in spreadsheets and can be extremely difficult to find. A number of audits of existing spreadsheets have been reported, but few details have been given about how the audits were performed. We developed and tested a new spreadsheet auditing protocol designed to find errors in operational spreadsheets. Our work showed which auditing procedures, used in what sequence and combination, were most effective across a wide range of spreadsheets. It also provided useful information on the size and complexity of operational spreadsheets, as well as the frequency with which certain types of errors occur.

3.
Web services offer a more reliable and efficient way to access online data than scraping web pages. However, interacting with web services to retrieve data often requires people to write a lot of code. Moreover, many web services return data in complex hierarchical structures that make it difficult for people to perform any further data manipulation. We developed Gneiss, a tool that extends the familiar spreadsheet metaphor to support using structured web service data. Gneiss lets users retrieve or stream arbitrary JSON data returned from web services to a spreadsheet using interaction techniques, without writing any code. It introduces a novel visualization that represents hierarchies in data using nested spreadsheet cells and allows users to easily reshape and regroup the extracted structured data. Data flow is two-way between the spreadsheet and the web services, enabling people to easily make a new web service call and retrieve new data by modifying spreadsheet cells. We report results from a user study showing that Gneiss helped spreadsheet users use and analyze structured data more efficiently than Excel, and even outperform professional programmers writing code. We further use a set of examples to demonstrate our tool's ability to create reusable data extraction and manipulation programs that work with complex web service data.
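A toy sketch (not Gneiss itself) of the core idea of mapping hierarchical JSON onto spreadsheet-style cells: every leaf value gets a nested "cell address" built from its path, so the hierarchy survives in the addressing. The sample data is invented.

```python
# Toy illustration: flatten arbitrary JSON into (path, value) cells,
# where nested paths stand in for nested spreadsheet cells.

def json_to_cells(node, path=()):
    if isinstance(node, dict):
        for key, child in node.items():
            yield from json_to_cells(child, path + (key,))
    elif isinstance(node, list):
        for i, child in enumerate(node):
            yield from json_to_cells(child, path + (i,))
    else:
        yield path, node

data = {"city": "Pittsburgh",
        "forecast": [{"day": "Mon", "high": 70},
                     {"day": "Tue", "high": 65}]}
cells = list(json_to_cells(data))
# cells contains e.g. (("forecast", 0, "high"), 70)
```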

4.
To improve the efficiency of formula evaluation in Web-based reporting systems, a model for optimizing formula computation performance is established. A method for analyzing dependencies among formulas is proposed that adaptively builds a dependency graph of the formulas; on top of this graph, an efficient layered topological sorting algorithm is further proposed, which greatly improves the efficiency of formula evaluation in reports and reduces the total execution time of in-sheet formula computation for each report. Theoretical analysis and experimental results show that the model is highly practical and the algorithm efficient.
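The dependency-graph-plus-topological-sort idea can be sketched as follows; this is a minimal illustration (Kahn's algorithm over invented cell names), not the paper's layered algorithm.

```python
# Build a dependency graph among formulas and evaluate them in topological
# order, so each formula is computed only after the cells it references.
from collections import deque

def topo_order(deps):
    """deps: {cell: set of cells it references}; every cell must appear as a key.
    Returns an order in which each formula is computed after its inputs."""
    indeg = {c: len(d) for c, d in deps.items()}
    users = {c: [] for c in deps}
    for c, d in deps.items():
        for u in d:
            users[u].append(c)
    queue = deque(c for c, n in indeg.items() if n == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for v in users[c]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(deps):
        raise ValueError("circular reference among formulas")
    return order

order = topo_order({"C1": {"A1", "B1"}, "B1": {"A1"}, "A1": set()})
# A1 is computed first, then B1, then C1
```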

5.
Spreadsheets are very widely used at various levels of the organization. Studies have shown that errors occur frequently during the development of spreadsheet models. Empirical studies of operational spreadsheet models show that a large percentage of them contain errors. However, the identification of errors is difficult and tedious, and errors have led to drastically wrong decisions. It is thus important to develop new strategies and auditing tools for detecting errors. A suite of new auditing visualization tools has been designed and implemented in Visual Basic for Applications (VBA), as an add-in module for easy inclusion in any Excel 97 or Excel 2000 installation. Furthermore, four strategies are proposed for detecting errors. These range from an overview strategy to identify logical components of the spreadsheet model, to specific strategies targeted at specific types of error. Illustrations show how these strategies can be supported with the new visualization tools.

6.
Past methods of integrating different types of stratigraphic and lithofacies maps relied on the superimposition of isolines, which were difficult to interpret. By normalizing the data of various basemaps so that each has the same range of values, compound dimensionless values are obtained which assist in the recognition and interpretation of trends. Data processing is performed easily on a spreadsheet, which has the advantage that the effect of weighting the different maps can be observed immediately.
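A minimal sketch of the normalization step, with invented basemap values and equal, illustrative weights:

```python
# Rescale each basemap to a common [0, 1] range so maps with different
# units can be combined into dimensionless compound values.

def normalize(values):
    """Rescale a basemap's values to the common range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

thickness = [100.0, 300.0, 500.0]    # e.g. isopach values in metres
sand_ratio = [0.1, 0.3, 0.5]         # dimensionless lithofacies ratio
weights = (0.5, 0.5)                 # changing these shows weighting effects
compound = [weights[0] * a + weights[1] * b
            for a, b in zip(normalize(thickness), normalize(sand_ratio))]
# compound values are dimensionless and directly comparable across maps
```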

7.
We show how to use a spreadsheet to calculate numerical solutions of the one-dimensional time-dependent heat-conduction equation. We find the spreadsheet to be a practical tool for numerical calculations, because the algorithms can be implemented simply and quickly without complicated programming, and the spreadsheet utilities can be used not only for graphics, printing, and file management, but also for advanced mathematical operations. We implement the explicit and the Crank-Nicolson forms of the finite-difference approximations and discuss the geological applications of both methods. We also show how to adjust these two algorithms to a nonhomogeneous lithosphere in which the thermal properties (thermal conductivity, density, and radioactive heat generation) change from the upper crust to the lower crust and to the mantle. The solution is presented in a way that can fit any spreadsheet (Lotus-123, Quattro-Pro, Excel). In addition, a Quattro-Pro program with macros that calculate and display the thermal evolution of the lithosphere after a thermal perturbation is enclosed in an appendix.
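The explicit finite-difference scheme can be sketched as follows; this is an illustrative Python translation of the spreadsheet layout (each call plays the role of filling the next spreadsheet row, i.e. one time step), with invented values, not the authors' macros.

```python
# Explicit (FTCS) finite-difference scheme for dT/dt = kappa * d2T/dx2.

def explicit_step(T, r):
    """Advance one time step. r = kappa*dt/dx**2 must satisfy r <= 0.5
    for the explicit scheme to be numerically stable."""
    new = T[:]                      # boundary temperatures stay fixed
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i+1] - 2.0*T[i] + T[i-1])
    return new

T = [0.0, 0.0, 100.0, 0.0, 0.0]     # initial thermal perturbation
for _ in range(50):
    T = explicit_step(T, 0.25)
# the perturbation decays symmetrically toward the fixed 0-degree boundaries
```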

8.
In a typical COBOL program, the data division consists of 50% of the lines of code. Automatic type inference can help to understand the large collections of variable declarations contained therein, showing how variables are related based on their actual usage. The most problematic aspect of type inference is pollution, the phenomenon that types become too large, and contain variables that intuitively should not belong to the same type. The aim of the paper is to provide empirical evidence for the hypothesis that the use of subtyping is an effective way for dealing with pollution. The main results include a tool set to carry out type inference experiments, a suite of metrics characterizing type inference outcomes, and the experimental observation that only one instance of pollution occurs in the case study conducted.
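A hypothetical illustration (not the paper's tool set) of the basic inference step: variables that flow into one another are unified into one inferred type via union-find. Eager unification is exactly what produces pollution, since long chains merge intuitively unrelated variables into one oversized type; the paper's subtyping remedy is not shown here.

```python
# Union-find over variable names: each unify() merges two inferred types.

class Types:
    def __init__(self):
        self.parent = {}

    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:           # path halving
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def unify(self, a, b):
        """Record that a and b share a type, e.g. from 'MOVE A TO B'."""
        self.parent[self.find(a)] = self.find(b)

t = Types()
t.unify("AMOUNT", "TOTAL")
t.unify("TOTAL", "GRAND-TOTAL")
# AMOUNT, TOTAL and GRAND-TOTAL now collapse into a single inferred type
```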

9.
A method and results of static and dynamic analysis of Pascal programs are described. In order to investigate characteristics of large systems programs developed by the stepwise refinement programming approach and written in Pascal, several Pascal compilers written in Pascal were analysed from both static and dynamic points of view. As a main conclusion, procedures play an important role in the stepwise refinement approach and implementors of a compiler and designers of high level language machines for Pascal-like languages should pay careful attention to this point. The set data structure is one of the characteristics of the Pascal language and statistics of set operations are also described.

10.
We present a reasoning system for inferring dimension information in spreadsheets. This system can be used to check the consistency of spreadsheet formulas and thus is able to detect errors in spreadsheets. Our approach is based on three static analysis components. First, the spatial structure of the spreadsheet is analyzed to infer a labeling relationship among cells. Second, cells that are used as labels are lexically analyzed and mapped to potential dimensions. Finally, dimension information is propagated through spreadsheet formulas. An important aspect of the rule system defining dimension inference is that it works bi-directionally, that is, not only “downstream” from referenced arguments to the current cell, but also “upstream” in the reverse direction. This flexibility makes the system robust and turns out to be particularly useful in cases when the initial dimension information that can be inferred from headers is incomplete or ambiguous. We have implemented a prototype system as an add-in to Excel. In an evaluation of this implementation we were able to detect dimension errors in almost 50% of the investigated spreadsheets, which shows (i) that the system works reliably in practice and (ii) that dimension information can be well exploited to uncover errors in spreadsheets.
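A toy sketch of the third component, dimension propagation through formulas. Dimensions are represented as unit-to-exponent maps; addition demands equal dimensions (a mismatch is the kind of error such a system reports), while multiplication adds exponents. All names and rules here are illustrative, not the paper's rule system.

```python
# A dimension is a map from unit name to exponent,
# e.g. {"m": 1, "s": -1} for metres per second.

def mul_dim(a, b):
    """Dimension of a product: exponents add (negate exponents to divide)."""
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + exp
        if out[unit] == 0:
            del out[unit]           # drop cancelled units
    return out

def add_dim(a, b):
    """Addition is only consistent between equal dimensions."""
    if a != b:
        raise ValueError(f"dimension error: cannot add {a} and {b}")
    return a

speed = mul_dim({"m": 1}, {"s": -1})      # distance times 1/time
add_dim(speed, {"m": 1, "s": -1})         # consistent: both are m/s
# add_dim(speed, {"kg": 1}) would raise the kind of error the system flags
```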

11.
An empirical study of sentiment analysis for Chinese documents
Up to now, very little research has been conducted on sentiment classification for Chinese documents. In order to remedy this deficiency, this paper presents an empirical study of sentiment categorization on Chinese documents. Four feature selection methods (MI, IG, CHI and DF) and five learning methods (centroid classifier, K-nearest neighbor, winnow classifier, Naïve Bayes and SVM) are investigated on a Chinese sentiment corpus of 1021 documents. The experimental results indicate that IG performs best for sentimental term selection and SVM exhibits the best performance for sentiment classification. Furthermore, we found that sentiment classifiers are heavily dependent on domains or topics.
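Information gain, the best-performing selection method above, can be sketched as follows; the IG formula is the standard one, but the tiny corpus is invented for illustration, not the paper's data.

```python
# Information gain of a term for binary sentiment classification:
# IG(term) = H(class) - H(class | term present/absent).
import math

def entropy(pos, neg):
    total = pos + neg
    h = 0.0
    for n in (pos, neg):
        if n:
            p = n / total
            h -= p * math.log2(p)
    return h

def info_gain(docs, term):
    """docs: list of (set_of_terms, label in {0, 1})."""
    pos = sum(1 for _, y in docs if y == 1)
    gain = entropy(pos, len(docs) - pos)
    for part in ([d for d in docs if term in d[0]],
                 [d for d in docs if term not in d[0]]):
        if part:
            p1 = sum(1 for _, y in part if y == 1)
            gain -= len(part) / len(docs) * entropy(p1, len(part) - p1)
    return gain

docs = [({"good"}, 1), ({"good"}, 1), ({"bad"}, 0), ({"bad"}, 0)]
# "good" perfectly separates the classes here, so its IG is 1 bit
```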

12.
Multidimensional data computation is common in online analytical processing (OLAP) applications, but traditional SQL lacks support for it. This paper discusses the design of a Spreadsheet computation engine in the DM-DW data warehouse prototype system to address this problem, and extends the expressive power of SQL with a Spreadsheet clause that represents such computations more effectively.

13.
Robert P. Cook, Insup Lee. Software, 1982, 12(2): 195-203
More than 120,000 lines of Pascal programs, written by graduate students and faculty members, have been statically analysed to provide a better understanding of how the language is ‘really’ used. The analysis was done within twelve distinct contexts to discover differences in usage patterns among the various contexts. For example, it was found that 47 per cent of the operands in argument lists were constants. The results are displayed as tables of frequency counts which show how often each construct is used within a context. Also, we have compared our findings to the results from studies of other languages, such as FORTRAN, SAL and XPL.

14.
This article presents an empirical study devoted to characterizing the computational efficiency of an evolutionary algorithm (usually called canonical) implemented as a C program. The study analyzes the effects of several implementation decisions on the execution time of the resulting evolutionary algorithm. The implementation decisions studied include: memory utilization (dynamic vs. static variables and local vs. global variables), methods for ordering the population, code substitution mechanisms, and the routines for generating pseudorandom numbers within the evolutionary algorithm. The results obtained in the experimental analysis allow us to conclude that significant improvements in efficiency can be gained by applying simple guidelines for programming an evolutionary algorithm in C. Copyright © 2013 John Wiley & Sons, Ltd.
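For reference, the canonical evolutionary algorithm that such studies benchmark can be sketched as follows, here in Python rather than C (the paper's subject is C-specific implementation choices); population size, rates, and the OneMax fitness are illustrative.

```python
# Canonical EA sketch: tournament selection, one-point crossover,
# bit-flip mutation, generational replacement, OneMax fitness.
import random

def evolve(bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = sum                                  # OneMax: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)              # binary tournament
            p1 = max(a, b, key=fitness)
            a, b = rng.sample(pop, 2)
            p2 = max(a, b, key=fitness)
            cut = rng.randrange(1, bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(bits):                  # bit-flip mutation
                if rng.random() < 1 / bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)

best = evolve()   # best OneMax score found, out of a maximum of 20
```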

15.
This paper describes results obtained from a static analysis of 340 COBOL programs collected from commercial and industrial installations. The analysis was performed by a syntax analyser designed specifically to analyse source program statements, gather detailed information and produce a report on the definition and use of data and language in the programs analysed.

16.
An empirical analysis of FORTRAN programs

17.
The present study investigated the effects of multimedia modules and their combinations on the learning of procedural tasks. In the experiment, 72 participants were classified as having either low or high spatial ability based on a spatial ability test. They were randomly assigned to one of six experimental conditions in a 2 × 3 factorial design with verbal modality (on-screen text procedure vs. auditory procedure) and format of visual representation (static visual representation vs. static visual representation with motion cues vs. animated visual representation). After they completed the learning session, the ability to perform the procedural task was directly measured in a realistic setting. The results revealed that: (1) in the static visual representation condition, the high spatial ability group outperformed the low spatial ability group; (2) for the low spatial ability participants, the animated visual representation group outperformed the static visual representation group, but the static visual representation with motion cues group did not; (3) the use of animated visual representation helped participants with low spatial ability more than those with high spatial ability; and (4) a modality effect was found for the measure of satisfaction when viewing the animated visual representation. Since the participants with low spatial ability benefited from the use of animation, the results might support the idea that people are better able to retrieve procedural information by viewing animated representations. The findings might also reflect a preference for the auditory mode of presentation with greater familiarity with the type of visual representation.

18.
Error flow analysis and testing techniques focus on the introduction of errors through code faults into data states of an executing program, and their subsequent cancellation or propagation to output. The goals and limitations of several error flow techniques are discussed, including mutation analysis, fault-based testing, PIE analysis, and dynamic impact analysis. The attributes desired of a good error flow technique are proposed, and a model called dynamic error flow analysis (DEFA) is described that embodies many of these attributes. A testing strategy is proposed that uses DEFA information to select an optimal set of test paths and to quantify the results of successful testing. An experiment is presented that illustrates this testing strategy. In this experiment, the proposed testing strategy outperforms mutation testing in catching arbitrary data state errors.

19.
Context: There are many claimed advantages for the use of design patterns and their impact on software quality. However, there is not enough empirical evidence to support these claimed benefits, and some studies have found contrary results.
Objective: This empirical study aims to quantitatively measure and compare the fault density of motifs of design patterns in object-oriented systems at different levels: design level, category level, motif level, and role level.
Method: An empirical study was conducted involving five open-source software systems. Data were analyzed using appropriate statistical tests for significant differences.
Results: There is no consistent difference in fault density between classes that participate in design motifs and non-participant classes. However, classes that participate in structural design motifs tend to be less fault-dense. For creational design motifs, there is no clear tendency in the difference in fault density. For behavioral design motifs, there is no significant difference between participant and non-participant classes. We observed associations between five design motifs (Builder, Factory Method, Adapter, Composite and Decorator) and fault density. At the role level, only one pair of roles (Adapter vs. Client) shows a significant difference in fault density.
Conclusion: There is no clear tendency in the difference in fault density between participant and non-participant classes in design motifs. However, structural design motifs have a negative association with fault density. The Builder design motif has a positive association with fault density, whilst the Factory Method, Adapter, Composite, and Decorator design motifs have negative associations with fault density. Classes that participate in the Adapter role are less fault-dense than classes that participate in the Client role.

20.
This paper proposes an empirical mode decomposition (EMD) denoising method based on singular spectrum analysis (SSA). The method first applies EMD to the noisy signal, yielding a set of intrinsic mode functions (IMFs). SSA is then used to denoise each IMF: the first IMF is treated as high-frequency noise, and from it the noise energy contained in the remaining IMFs is estimated, giving the fraction of each remaining IMF's energy attributable to signal. An appropriate window length is then chosen and each IMF is transformed by SSA; guided by the signal energy ratio of that IMF, suitable singular value decomposition (SVD) components are selected for reconstruction, yielding a denoised IMF. Summing all the reconstructed IMF components and the residual gives the final denoised signal. Experiments comparing this method with wavelet soft-thresholding, EMD soft-thresholding, and EMD filtering show that it outperforms the other methods overall and is an effective signal denoising technique.
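The noise-energy step can be sketched as follows, assuming the widely used fractional-Gaussian-noise model of EMD, E_k = (E_1/beta) * rho**(-k) with the commonly quoted constants beta = 0.719 and rho = 2.01; the toy IMFs are invented, and the SSA/SVD reconstruction itself is not shown.

```python
# Estimate, from IMF 1 (taken as pure noise), the fraction of each later
# IMF's energy that is attributable to signal rather than noise.

def imf_energy(imf):
    return sum(x * x for x in imf) / len(imf)

def expected_noise_energy(e1, k, beta=0.719, rho=2.01):
    """Predicted noise energy in IMF k (k >= 2), given IMF 1's energy e1."""
    return (e1 / beta) * rho ** (-k)

def signal_energy_ratio(imfs):
    """Per-IMF fraction of energy attributed to signal (IMF 1 -> 0.0)."""
    e1 = imf_energy(imfs[0])
    ratios = [0.0]                       # IMF 1 is treated as pure noise
    for k, imf in enumerate(imfs[1:], start=2):
        e = imf_energy(imf)
        noise = min(expected_noise_energy(e1, k), e)
        ratios.append((e - noise) / e if e else 0.0)
    return ratios

imfs = [[1.0, -1.0, 1.0, -1.0], [2.0] * 4, [3.0] * 4]   # toy IMFs
ratios = signal_energy_ratio(imfs)
# ratios guide how many SVD components to keep when reconstructing each IMF
```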


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号