Similar Documents
20 similar documents found (search time: 0 ms)
1.
Previous computerized productivity measurement models to assist firms in computing productivity measures from a set of input data have been constructed using procedural languages, primarily Fortran and Basic. These models have a number of shortcomings which have detracted from their usefulness. First, these models must be modified to adapt them to the organization and data available from a particular firm. Modification is expensive and time consuming, since it requires a detailed knowledge of the structure of the model and the language in which it is programmed, in addition to a detailed knowledge of the firm. Second, data entry is not only difficult and time consuming, but the user also has no indication of what is happening between the input of data and the output of the final productivity measure.

This paper describes the development of an interactive multifactor productivity measurement model using Lotus 123. With the spreadsheet software, the model can be easily adapted to fit the needs of the firm by the firm's industrial engineer with only a working knowledge of Lotus 123. Changes in data can be made easily using the features of the spreadsheet software, and the effects of the changes on various aspects of the model can be seen easily and rapidly.
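
As an illustration of the kind of calculation such a model automates, here is a minimal sketch that computes a multifactor productivity index as output value divided by a weighted sum of input costs. The input categories, costs, and weights are hypothetical, not taken from the paper:

```python
# A minimal sketch of a multifactor productivity index: output value over
# the weighted sum of input costs. All names and figures are illustrative.
def multifactor_productivity(output_value, inputs):
    """inputs: dict mapping input name -> (cost, weight)."""
    weighted_input = sum(cost * weight for cost, weight in inputs.values())
    return output_value / weighted_input

period_inputs = {
    "labor":    (120_000.0, 1.0),
    "material": ( 80_000.0, 1.0),
    "capital":  ( 40_000.0, 1.0),
    "energy":   ( 10_000.0, 1.0),
}
print(multifactor_productivity(300_000.0, period_inputs))  # -> 1.2
```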


2.
Errors are prevalent in spreadsheets and can be extremely difficult to find. A number of audits of existing spreadsheets have been reported, but few details have been given about how the audits were performed. We developed and tested a new spreadsheet auditing protocol designed to find errors in operational spreadsheets. Our work showed which auditing procedures, used in what sequence and combination, were most effective across a wide range of spreadsheets. It also provided useful information on the size and complexity of operational spreadsheets, as well as the frequency with which certain types of errors occur.

3.
To improve the efficiency of formula evaluation in Web-based reporting systems, a model for optimizing formula-computation performance is established. A method for analyzing dependencies among formulas is proposed, which adaptively constructs a dependency graph between formulas; on the basis of this graph, an efficient hierarchical topological-sorting algorithm is further proposed, which greatly improves the efficiency of formula evaluation and reduces the total execution time of in-sheet formula computation for each report. Theoretical analysis and experimental results show that the model is feasible and the algorithm efficient.
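
For illustration, here is a minimal sketch of the general technique the abstract describes (dependency graph plus topological ordering), using Kahn's algorithm rather than the paper's hierarchical variant; the cell names and dependencies are made up:

```python
from collections import defaultdict, deque

def recalculation_order(deps):
    """deps: dict mapping a cell to the set of cells its formula reads.
    Returns an order in which every cell is computed after its inputs."""
    cells = set(deps) | {r for reads in deps.values() for r in reads}
    indegree = {c: 0 for c in cells}
    dependents = defaultdict(list)
    for cell, reads in deps.items():
        indegree[cell] = len(reads)
        for r in reads:
            dependents[r].append(cell)
    queue = deque(c for c in cells if indegree[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for d in dependents[c]:
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(cells):
        raise ValueError("circular reference among formulas")
    return order

print(recalculation_order({"B1": {"A1"}, "C1": {"A1", "B1"}, "D1": {"C1"}}))
# -> ['A1', 'B1', 'C1', 'D1']
```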

4.
Past methods of integrating different types of stratigraphic and lithofacies maps relied on the superimposition of isolines, which were difficult to interpret. By normalizing the data of various basemaps so that each has the same range of values, compound dimensionless values are obtained which assist in the recognition and interpretation of trends. Data processing is performed easily on a spreadsheet, which has the advantage that the effect of weighting the different maps can be observed immediately.
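
A minimal sketch of the normalization-and-weighting step described above: each base map is rescaled to a common 0-1 range, then the maps are combined with weights into dimensionless compound values. The maps and weights below are illustrative, not from the paper:

```python
def normalize(values):
    """Rescale a base map's values to the 0-1 range (assumes hi > lo)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def compound(maps, weights):
    """maps: equal-length value lists (one per base map) at the same grid nodes."""
    norm = [normalize(m) for m in maps]
    return [sum(w * col[i] for w, col in zip(weights, norm))
            for i in range(len(maps[0]))]

thickness = [10.0, 40.0, 25.0]  # e.g. isopach values at three grid nodes
sand_frac = [0.2, 0.8, 0.5]     # e.g. lithofacies ratio at the same nodes
print(compound([thickness, sand_frac], weights=[1.0, 2.0]))
# -> [0.0, 3.0, 1.5]: the effect of reweighting a map is seen immediately
```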

5.
In a typical COBOL program, the data division consists of 50% of the lines of code. Automatic type inference can help to understand the large collections of variable declarations contained therein, showing how variables are related based on their actual usage. The most problematic aspect of type inference is pollution, the phenomenon that types become too large and contain variables that intuitively should not belong to the same type. The aim of the paper is to provide empirical evidence for the hypothesis that the use of subtyping is an effective way of dealing with pollution. The main results include a tool set to carry out type inference experiments, a suite of metrics characterizing type inference outcomes, and the experimental observation that only one instance of pollution occurs in the case study conducted.

6.
We show how to use a spreadsheet to calculate numerical solutions of the one-dimensional time-dependent heat-conduction equation. We find the spreadsheet to be a practical tool for numerical calculations, because the algorithms can be implemented simply and quickly without complicated programming, and the spreadsheet utilities can be used not only for graphics, printing, and file management, but also for advanced mathematical operations. We implement the explicit and the Crank-Nicolson forms of the finite-difference approximations and discuss the geological applications of both methods. We also show how to adjust these two algorithms to a nonhomogeneous lithosphere in which the thermal properties (thermal conductivity, density, and radioactive heat generation) change from the upper crust to the lower crust and to the mantle. The solution is presented in a way that can fit any spreadsheet (Lotus-123, Quattro-Pro, Excel). In addition, a Quattro-Pro program with macros that calculate and display the thermal evolution of the lithosphere after a thermal perturbation is enclosed in an appendix.
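
A minimal sketch of the explicit (forward-time, centred-space) scheme for the 1-D equation dT/dt = kappa * d2T/dx2, the simpler of the two methods the paper implements; the grid spacing, diffusivity, and boundary values are illustrative, and stability requires r = kappa*dt/dx^2 <= 1/2:

```python
def step(T, r):
    """One explicit time step with fixed-temperature boundaries."""
    return [T[0]] + [T[i] + r * (T[i+1] - 2 * T[i] + T[i-1])
                     for i in range(1, len(T) - 1)] + [T[-1]]

kappa = 1e-6          # thermal diffusivity, m^2/s (illustrative)
dx, dt = 1000.0, 1e8  # grid spacing (m) and time step (s)
r = kappa * dt / dx**2                 # 0.1 <= 0.5, so the scheme is stable
T = [0.0] + [1300.0] * 9 + [0.0]       # initial profile: hot interior, cold ends
for _ in range(100):
    T = step(T, r)
print([round(t) for t in T])           # temperature after 100 steps
```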

7.
A method and results of static and dynamic analysis of Pascal programs are described. In order to investigate characteristics of large systems programs developed by the stepwise refinement programming approach and written in Pascal, several Pascal compilers written in Pascal were analysed from both static and dynamic points of view. As a main conclusion, procedures play an important role in the stepwise refinement approach, and implementors of a compiler and designers of high level language machines for Pascal-like languages should pay careful attention to this point. The set data structure is one of the characteristics of the Pascal language and statistics of set operations are also described.

8.
We present a reasoning system for inferring dimension information in spreadsheets. This system can be used to check the consistency of spreadsheet formulas and thus is able to detect errors in spreadsheets. Our approach is based on three static analysis components. First, the spatial structure of the spreadsheet is analyzed to infer a labeling relationship among cells. Second, cells that are used as labels are lexically analyzed and mapped to potential dimensions. Finally, dimension information is propagated through spreadsheet formulas. An important aspect of the rule system defining dimension inference is that it works bi-directionally, that is, not only “downstream” from referenced arguments to the current cell, but also “upstream” in the reverse direction. This flexibility makes the system robust and turns out to be particularly useful in cases when the initial dimension information that can be inferred from headers is incomplete or ambiguous. We have implemented a prototype system as an add-in to Excel. In an evaluation of this implementation we were able to detect dimension errors in almost 50% of the investigated spreadsheets, which shows (i) that the system works reliably in practice and (ii) that dimension information can be well exploited to uncover errors in spreadsheets.
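
A minimal sketch of the propagation idea only, not the paper's bidirectional rule system: dimensions are exponent vectors, multiplication adds exponents, and addition demands identical dimensions, so a mismatch flags a probable formula error:

```python
def mul_dims(d1, d2):
    """Multiplying quantities adds dimension exponents."""
    out = dict(d1)
    for unit, exp in d2.items():
        out[unit] = out.get(unit, 0) + exp
        if out[unit] == 0:
            del out[unit]        # cancelled exponents disappear
    return out

def add_dims(d1, d2):
    """Adding quantities requires identical dimensions."""
    if dict(d1) != dict(d2):
        raise TypeError(f"dimension error: {d1} + {d2}")
    return dict(d1)

metres, per_second = {"m": 1}, {"s": -1}
speed = mul_dims(metres, per_second)     # {'m': 1, 's': -1}
print(add_dims(speed, speed))            # consistent: same dimension
print(mul_dims(speed, {"s": 1}))         # {'m': 1}: the s exponents cancel
# add_dims(metres, {"s": 1}) would raise, flagging a likely formula error
```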

9.
An empirical study of sentiment analysis for Chinese documents (cited by: 1; self-citations: 0; by others: 1)
To date, very little research has been conducted on sentiment classification for Chinese documents. In order to remedy this deficiency, this paper presents an empirical study of sentiment categorization on Chinese documents. Four feature selection methods (MI, IG, CHI and DF) and five learning methods (centroid classifier, K-nearest neighbor, winnow classifier, Naïve Bayes and SVM) are investigated on a Chinese sentiment corpus of 1021 documents. The experimental results indicate that IG performs best for sentiment-term selection and SVM exhibits the best performance for sentiment classification. Furthermore, we found that sentiment classifiers are strongly dependent on domain and topic.
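
A minimal sketch of this kind of pipeline using scikit-learn: bag-of-words features, feature selection, and a linear SVM. scikit-learn ships chi-squared scoring (the paper's CHI criterion) directly, whereas the best-performing IG criterion would need a custom score function; the four-document corpus is a toy illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Pre-segmented Chinese reviews; 1 = positive, 0 = negative (toy data).
docs = ["质量 很好 非常 满意", "物流 太慢 很 失望",
        "性价比 高 推荐 购买", "做工 粗糙 不 推荐"]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),  # tokens are whitespace-separated
    SelectKBest(chi2, k=10),                # keep the 10 strongest terms
    LinearSVC(),
)
clf.fit(docs, labels)
print(clf.predict(["很 满意 推荐"]))        # expected: [1] on this toy data
```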

10.
Multidimensional data computation is frequently required by online analytical processing (OLAP) applications, but traditional SQL lacks support for it. This paper discusses the design of a Spreadsheet computation engine in the DM-DW data warehouse prototype system to solve this problem, and extends the expressive power of SQL with a Spreadsheet clause so that such computations can be expressed more effectively.
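
For illustration only, here is a pandas sketch of the spreadsheet-style computation such a clause expresses (deriving new cells of a multidimensional cube by formula); pandas stands in for the paper's extended SQL, and all column and member names are made up:

```python
import pandas as pd

sales = pd.DataFrame({
    "region":  ["east", "east", "west", "west"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "amount":  [100.0, 120.0, 90.0, 110.0],
})
cube = sales.pivot_table(index="region", columns="quarter", values="amount")

# Spreadsheet-like formulas over the cube:
cube["H1"] = cube["Q1"] + cube["Q2"]   # derived column (half-year total)
cube.loc["total"] = cube.sum()         # derived row (all regions)
print(cube)
```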

11.
Robert P. Cook, Insup Lee. Software, 1982, 12(2): 195-203
More than 120,000 lines of Pascal programs, written by graduate students and faculty members, have been statically analysed to provide a better understanding of how the language is ‘really’ used. The analysis was done within twelve distinct contexts to discover differences in usage patterns among the various contexts. For example, it was found that 47 per cent of the operands in argument lists were constants. The results are displayed as tables of frequency counts which show how often each construct is used within a context. Also, we have compared our findings to the results from studies of other languages, such as FORTRAN, SAL and XPL.

12.
This paper describes results obtained from a static analysis of 340 COBOL programs collected from commercial and industrial installations. The analysis was performed by a syntax analyser designed specifically to analyse source program statements, gather detailed information and produce a report on the definition and use of data and language in the programs analysed.

13.
An empirical analysis of FORTRAN programs (cited by: 1; self-citations: 0; by others: 1)

14.
The present study investigated the effects of multimedia modules and their combinations on the learning of procedural tasks. In the experiment, 72 participants were classified as having either low or high spatial ability based on a spatial-ability test. They were randomly assigned to one of six experimental conditions in a 2 × 3 factorial design with verbal modality (on-screen text procedure vs. auditory procedure) and format of visual representation (static visual representation vs. static visual representation with motion cues vs. animated visual representation). After they completed the learning session, the ability to perform the procedural task was measured directly in a realistic setting. The results revealed that: (1) in the static visual representation condition, the high spatial ability group outperformed the low spatial ability group; (2) for the low spatial ability participants, the animated visual representation group outperformed the static visual representation group, whereas the static visual representation with motion cues group did not; (3) the use of animated visual representation helped participants with low spatial ability more than those with high spatial ability; and (4) a modality effect was found for the measure of satisfaction when viewing the animated visual representation. Since the participants with low spatial ability benefited from the use of animation, the results may support the idea that people are better able to retrieve procedural information by viewing animated representations. The findings may also reflect a preference for the auditory mode of presentation when the type of visual representation is more familiar.

15.
Error flow analysis and testing techniques focus on the introduction of errors through code faults into data states of an executing program, and their subsequent cancellation or propagation to output. The goals and limitations of several error flow techniques are discussed, including mutation analysis, fault-based testing, PIE analysis, and dynamic impact analysis. The attributes desired of a good error flow technique are proposed, and a model called dynamic error flow analysis (DEFA) is described that embodies many of these attributes. A testing strategy is proposed that uses DEFA information to select an optimal set of test paths and to quantify the results of successful testing. An experiment is presented that illustrates this testing strategy. In this experiment, the proposed testing strategy outperforms mutation testing in catching arbitrary data state errors.
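
A minimal sketch of the mutation-analysis idea mentioned above: inject small faults (mutants) into a function and check whether the test suite detects, or "kills", them. Real tools rewrite ASTs or bytecode; these hand-written mutants are illustrative:

```python
def original(a, b):
    return a + b

mutants = [
    lambda a, b: a - b,      # arithmetic operator replaced: + becomes -
    lambda a, b: a + b + 1,  # off-by-one constant inserted
]

def test_suite(f):
    """A tiny test suite; a mutant that passes it has survived."""
    return f(2, 3) == 5 and f(0, 0) == 0

killed = sum(1 for m in mutants if not test_suite(m))
print(f"mutation score: {killed}/{len(mutants)}")   # -> 2/2
```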

16.
The size of today’s programs continues to grow, as does the number of bugs they contain. Testing alone is rarely able to flush out all bugs, and many lurk in difficult-to-test corner cases. An important alternative is static analysis, in which correctness properties of a program are checked without running it. While it cannot catch all errors, static analysis can catch many subtle problems that testing would miss. We propose a new space of abstractions for pointer analysis, an important component of static analysis for C and similar languages. We identify two main components of any abstraction: how to model statement order and how to model conditionals. We then present a new model of programs that enables us to explore different abstractions in this space. Our assign-fetch graph represents reads and writes to memory instead of traditional points-to relations and leads to concise function summaries that can be used in any context. Its flexibility supports many new analysis techniques with different trade-offs between precision and speed. We present the details of our abstraction space, explain where existing algorithms fit, describe a variety of new analysis algorithms based on our assign-fetch graphs, and finally present experimental results that show our flow-aware abstraction for statement ordering both runs faster and produces more precise results than traditional flow-insensitive analysis.
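
For contrast, here is a minimal sketch of the traditional flow-insensitive points-to analysis used as the baseline above (this is the baseline, not the paper's assign-fetch technique): points-to sets are propagated through copy assignments until a fixpoint is reached:

```python
def points_to(addr_of, copies):
    """addr_of: (p, x) pairs for 'p = &x'; copies: (p, q) pairs for 'p = q'.
    Statement order is ignored, i.e. the analysis is flow-insensitive."""
    pts = {}
    for p, x in addr_of:
        pts.setdefault(p, set()).add(x)
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for p, q in copies:
            before = len(pts.setdefault(p, set()))
            pts[p] |= pts.get(q, set())
            changed |= len(pts[p]) != before
    return pts

print(points_to(addr_of=[("p", "x"), ("q", "y")],
                copies=[("r", "p"), ("r", "q")]))
# -> {'p': {'x'}, 'q': {'y'}, 'r': {'x', 'y'}}
```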

17.
Concept location, the problem of associating human-oriented concepts with their counterpart solution-domain concepts, is a fundamental problem that lies at the heart of software comprehension. Recent research has attempted to alleviate the impact of the concept location problem through the application of methods drawn from the information retrieval (IR) community. Here we present a new approach based on a complementary IR method which also has a sound basis in cognitive theory. We compare our approach to related work through an experiment and present our conclusions. This research adapts and expands upon existing language modelling frameworks in IR for use in concept location in software systems. In doing so it is novel in that it leverages implicit information available in system documentation. Surprisingly, empirical evaluation of this approach showed little performance benefit overall, and several possible explanations for this finding are put forward.
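
A minimal sketch of the language-modelling retrieval idea: rank code artifacts for a concept query by query likelihood under a Dirichlet-smoothed unigram model. This is the standard formulation, not necessarily the paper's exact variant, and the corpus is made up:

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=2000.0):
    """log P(query | doc) under a Dirichlet-smoothed unigram model."""
    doc_tf, coll_tf = Counter(doc), Counter(collection)
    score = 0.0
    for t in query:
        p_coll = coll_tf[t] / len(collection)   # background probability
        if p_coll == 0:
            continue                            # term unseen anywhere
        p = (doc_tf[t] + mu * p_coll) / (len(doc) + mu)
        score += math.log(p)
    return score

corpus = {"parser.c": "token scan parse tree node".split(),
          "net.c":    "socket send packet buffer".split()}
collection = [t for d in corpus.values() for t in d]
query = "parse token".split()
best = max(corpus, key=lambda name: query_likelihood(query, corpus[name], collection))
print(best)   # -> parser.c
```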

18.
A system of annotated types is proposed as a means of describing and inferring static information, such as strictness and constancy, about functional programs. An abstract semantics is given in terms of projections. A close connection between annotated type assignment and projection analysis is demonstrated.

19.
This is the first empirical study of the use of the C macro preprocessor, Cpp. To determine how the preprocessor is used in practice, this paper analyzes 26 packages comprising 1.4 million lines of publicly available C code. We determine the incidence of C preprocessor usage (whether in macro definitions, macro uses, or dependences upon macros) that is complex, potentially problematic, or inexpressible in terms of other C or C++ language features. We taxonomize these various aspects of preprocessor use and particularly note data that are material to the development of tools for C or C++, including translating from C to C++ to reduce preprocessor usage. Our results show that, while most Cpp usage follows fairly simple patterns, an effective program analysis tool must address the preprocessor. The intimate connection between the C programming language and Cpp, and Cpp's unstructured transformations of token streams, often hinder both programmer understanding of C programs and tools built to engineer C programs, such as compilers, debuggers, call graph extractors, and translators. Most tools make no attempt to analyze macro usage, but simply preprocess their input, which results in a number of negative consequences; an analysis that takes Cpp into account is preferable, but building such tools requires an understanding of actual usage. Differences between the semantics of Cpp and those of C can lead to subtle bugs stemming from the use of the preprocessor, but there are no previous reports of the prevalence of such errors. Use of C++ can reduce some preprocessor usage, but such usage has not been previously measured. Our data and analyses shed light on these issues and others related to the practical understanding or manipulation of real C programs. The results are of interest to language designers, tool writers, programmers, and software engineers.
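
A minimal sketch of the kind of measurement such a study involves: scan C source text for #define directives and classify macros as object-like or function-like. Real preprocessor analysis must also handle comments, line continuations, and conditionals; this single regex pass is illustrative:

```python
import re

# Matches '#define NAME' and captures a '(' that starts a parameter list.
DEFINE = re.compile(r"^\s*#\s*define\s+(\w+)(\()?", re.MULTILINE)

def macro_stats(source_text):
    object_like, function_like = 0, 0
    for match in DEFINE.finditer(source_text):
        if match.group(2):              # '(' immediately after the name
            function_like += 1
        else:
            object_like += 1
    return object_like, function_like

sample = "#define MAX 100\n#define SQR(x) ((x)*(x))\n"
print(macro_stats(sample))              # -> (1, 1)
```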

20.
This research studies factor analysis of traditional survey questionnaires for interactive software evaluation in order to construct a Principal Factor Conversion Function (PFCF). Such a PFCF may be used in several ways: first, for exploratory purposes, to discover the principal factors of the traditional questionnaire; second, to test the potential for data reduction without significant loss of information; and third, to compare the traditional manual evaluation to a condensed computerized interactive software evaluation. A CAI system was used as the software to be evaluated in this research.
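
A minimal sketch of the data-reduction question using principal component analysis (a close relative of the principal-factor extraction described, not necessarily the study's exact method); the response matrix is synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic questionnaire: 8 respondents answer 6 items driven by 2 factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(8, 2))             # two underlying factors
loadings = rng.normal(size=(2, 6))           # item loadings
responses = latent @ loadings + 0.1 * rng.normal(size=(8, 6))

pca = PCA(n_components=2)
scores = pca.fit_transform(responses)        # each respondent as 2 factor scores
print(scores.shape)                          # (8, 2): 6 items reduced to 2 factors
print(pca.explained_variance_ratio_.sum())   # near 1.0: little information lost
```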
