1.
This essay continues my investigation of `syntactic semantics': the theory that, pace Searle's Chinese-Room Argument, syntax does suffice for semantics (in particular, for the semantics needed for a computational cognitive theory of natural-language understanding). Here, I argue that syntactic semantics (which is internal and first-person) is what has been called a conceptual-role semantics: the meaning of any expression is the role that it plays in the complete system of expressions. Such a `narrow', conceptual-role semantics is the appropriate sort of semantics to account (from an `internal', or first-person perspective) for how a cognitive agent understands language. Some have argued for the primacy of external, or `wide', semantics, while others have argued for a two-factor analysis. But although two factors can be specified (one internal and first-person, the other only specifiable in an external, third-person way), only the internal, first-person one is needed for understanding how someone understands. A truth-conditional semantics can still be provided, but only from a third-person perspective.
2.
The incorporation of global program analysis into recent compilers for Constraint Logic Programming (CLP) languages has greatly improved the efficiency of compiled programs. We present a global analyser based on abstract interpretation. Unlike traditional optimizers, whose designs tend to be ad hoc, the analyser has been designed with flexibility in mind. The analyser is incremental, allowing substantial program transformations by a compiler without requiring redundant re-computation of analysis data. The analyser is also generic in that it can perform a large number of different program analyses. Furthermore, the analyser has an object-oriented design, enabling it to be adapted to different applications easily and allowing it to be used with various CLP languages with simple modifications. As an example of this generality, we sketch the use of the analyser in two different applications involving two distinct CLP languages: an optimizing compiler for CLP(R) programs and an application for detecting occur-check problems in Prolog programs. © 1998 John Wiley & Sons Ltd.
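As a rough illustration of the genericity the abstract describes, the sketch below (in Python rather than a CLP language; the engine, domain, and all names are invented for illustration, not taken from the authors' analyser) writes a fixpoint engine once and parameterises it by an abstract domain, so that swapping in a different domain yields a different analysis.

```python
class SignDomain:
    """Abstract values: 'neg', 'zero', 'pos', 'top' (unknown); None is bottom."""
    TOP = 'top'

    @staticmethod
    def abstract(n):
        return 'zero' if n == 0 else ('pos' if n > 0 else 'neg')

    @staticmethod
    def join(a, b):
        if a is None:
            return b
        if b is None:
            return a
        return a if a == b else 'top'

    @staticmethod
    def add(a, b):
        if a == 'zero':
            return b
        if b == 'zero':
            return a
        return a if a == b and a != 'top' else 'top'  # pos+pos=pos, neg+neg=neg

def analyse(instrs, domain, max_iters=100):
    """Iterate the abstract transfer function over the store to a fixpoint."""
    env = {}
    for _ in range(max_iters):
        old = dict(env)
        for op, target, arg1, arg2 in instrs:
            if op == 'const':
                env[target] = domain.join(env.get(target), domain.abstract(arg1))
            elif op == 'add':
                val = domain.add(env.get(arg1, domain.TOP), env.get(arg2, domain.TOP))
                env[target] = domain.join(env.get(target), val)
        if env == old:          # no abstract value changed: fixpoint reached
            break
    return env

# x = 1; y = 2; z = x + y
prog = [('const', 'x', 1, None), ('const', 'y', 2, None), ('add', 'z', 'x', 'y')]
print(analyse(prog, SignDomain))    # {'x': 'pos', 'y': 'pos', 'z': 'pos'}
```

Replacing `SignDomain` with, say, a parity or interval domain changes the analysis without touching the engine, which is the kind of flexibility the paper attributes to its object-oriented design.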
3.
Function points have become an accepted measure of software size and are becoming an industry standard. However, the application of function point analysis is fairly complex and requires experience and a good understanding to apply it in a consistent manner. This paper describes the development of a knowledge-based, object-oriented system to assist an analyst in performing function point analysis. The objective of the function point analysis (FPA) tool is to allow an analyst to estimate system size in function points without having extensive training or experience using the function point method. The FPA tool uses information available in a functional specification that is a product of the requirements analysis phase of the software development life cycle. An object-oriented model was used to represent the functional requirements of a software system.
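For readers unfamiliar with the method the tool automates, the following sketch shows the standard IFPUG-style function point arithmetic (average complexity weights; the figures are the textbook values, not taken from the paper, and the example counts are invented).

```python
AVERAGE_WEIGHTS = {          # IFPUG average-complexity weights
    'EI': 4,    # external inputs
    'EO': 5,    # external outputs
    'EQ': 4,    # external inquiries
    'ILF': 10,  # internal logical files
    'EIF': 7,   # external interface files
}

def unadjusted_fp(counts):
    """Sum of component counts times their complexity weights."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

def adjusted_fp(ufp, gsc_ratings):
    """Apply the value adjustment factor VAF = 0.65 + 0.01 * TDI, where TDI
    totals the 14 general system characteristics, each rated 0-5."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    return ufp * (0.65 + 0.01 * sum(gsc_ratings))

counts = {'EI': 6, 'EO': 4, 'EQ': 3, 'ILF': 2, 'EIF': 1}
ufp = unadjusted_fp(counts)              # 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83
print(ufp, adjusted_fp(ufp, [3] * 14))   # 83 and 83 * 1.07 = 88.81
```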
4.
5.
One of object orientation's main limitations is the immaturity of the object-oriented analysis process. This article proposes an approach based on using linguistic information from information specifications during this process. Our method helps to analyze this information semantically and syntactically, and employs a semi-formal procedure to extract an object-oriented system's components.
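A toy sketch of the general linguistic heuristic such approaches build on (nouns suggest candidate classes, verbs suggest candidate operations); the lexicon and procedure below are invented for illustration and stand in for the paper's semi-formal procedure and a real part-of-speech tagger.

```python
# Hand-made lexicon standing in for a part-of-speech tagger (illustrative).
NOUNS = {'customer', 'account', 'balance', 'bank'}
VERBS = {'opens', 'deposits', 'withdraws', 'checks'}

def candidates(spec_text):
    """Extract candidate classes and operations from a specification sentence."""
    words = [w.strip('.,;').lower() for w in spec_text.split()]
    classes = sorted({w.capitalize() for w in words if w in NOUNS})
    operations = sorted({w for w in words if w in VERBS})
    return classes, operations

spec = "A customer opens an account and deposits money; the bank checks the balance."
print(candidates(spec))
# (['Account', 'Balance', 'Bank', 'Customer'], ['checks', 'deposits', 'opens'])
```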
6.
In this study, we developed an object-oriented (OO) framework with interactive graphics to assist pavement studies using finite element analysis (FEA). FEA has been proven to be effective in studying various pavement failure problems; however, it is time-consuming and error-prone to manually generate the load sequences where non-regular tire footprints, non-uniform tire-pavement contact stresses, and transverse wheel wander distributions are used. After FEA, extracting the deformations for failure analysis is necessary but tedious. The OO framework developed in this study handles the preprocessing and postprocessing tasks for the FEA of pavements. It has a graphical user interface and is platform independent. It was successfully used in developing a new criterion for characterizing pavement failures that involved approximately four hundred different FEA simulations.
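As a hint of the preprocessing such a framework automates, the sketch below lays out load positions for transverse wheel wander; the zero-mean normal-distribution assumption and all parameter values are illustrative, not taken from the paper.

```python
import math

def wander_offsets(std_dev_m=0.25, half_width_m=0.75, n_bins=7):
    """Discretise an assumed zero-mean normal wander distribution into
    lateral load positions with relative load frequencies."""
    width = 2 * half_width_m / n_bins
    centres = [-half_width_m + (i + 0.5) * width for i in range(n_bins)]
    pdf = [math.exp(-0.5 * (x / std_dev_m) ** 2) for x in centres]
    total = sum(pdf)
    return [(x, p / total) for x, p in zip(centres, pdf)]

for offset, weight in wander_offsets():
    print(f"lateral offset {offset:+.3f} m, load fraction {weight:.3f}")
```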
7.
8.
Workflow management technology helps modularize and control complex business processes within an enterprise. Generally speaking, a workflow management system (WfMS) is composed of two primary components: a design environment and a run-time system. Structural, timing, and resource verifications of a workflow specification are required to ensure the correctness of the specified system. In this paper, an incremental methodology is constructed to analyze resource consistency and temporal constraints after each edit unit defined on a workflow specification. The methodology introduces several algorithms for general and temporal analyses. The immediate feedback can improve the designer's judgment, and thus the speed and quality of the design.
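The incremental idea can be illustrated with a deliberately simplified sketch (assumed structure; the paper's actual algorithms are not reproduced here): after each edit unit, a deadline is rechecked against the longest-duration path through the task graph.

```python
from functools import lru_cache

def violates_deadline(durations, preds, deadline):
    """durations: task -> time; preds: task -> list of predecessor tasks.
    Returns True if the longest path through the DAG exceeds the deadline."""
    @lru_cache(maxsize=None)
    def finish(task):
        start = max((finish(p) for p in preds.get(task, [])), default=0)
        return start + durations[task]
    return max(finish(t) for t in durations) > deadline

durations = {'receive': 1, 'approve': 3, 'ship': 2}
preds = {'approve': ['receive'], 'ship': ['approve']}
print(violates_deadline(durations, preds, deadline=5))   # True: path takes 6

# An "edit unit" (here, shortening the approval task) is re-analysed at once:
durations['approve'] = 2
print(violates_deadline(durations, preds, deadline=5))   # False: path takes 5
```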
9.
The adaptation of the Shlaer-Mellor object-oriented analysis method for engineering the requirements of a mission-planning system by a missile-guidance software group is discussed. The Shlaer-Mellor object-oriented analysis method provides a structured means of identifying objects within a system by analyzing abstract data types, and uses these objects as a basis for building three formal models: information, state, and process. Several issues concerning the application of this method are discussed, including object selection and constraints, modeling and notation, training, and CASE tool support.
10.
A thesaurus is a collection of words classified according to some relatedness measures among them. In this paper, we lay the theoretical foundations of thesaurus construction through elementary meanings of words. The concept of elementary meanings has been advocated and utilized in compiling Webster's Collegiate Thesaurus. If each word is supplied with elementary meanings so that all its meanings are covered by them in a standard fashion, we can define various similarity measures for a given set of words. Here we take an axiomatic approach to analyzing the semantic structure of word groups. Assuming an abstract semantic world, we deduce closed sets as generalized synonym sets; that is, we show that under certain natural axioms, only closed sets need to be considered as far as the semantics are concerned. We also show that the set of generalized synonyms, described as certain pairs of closed sets, has a lattice structure. In order to have a flexible thesaurus, we also analyze the structure changes corresponding to three basic environmental changes: a new word-meaning relation is added, a new word is included with its word-meaning relations, or a new meaning is included with its word-meaning relations. We give algorithms that update the lattice structure from the previous one under each of these three operations. Received: 5 May 1996 / 21 February 2000
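A small sketch of the closure idea under an assumed formal-concept-analysis-style reading (the paper's own axioms are not reproduced here): the closure of a word set is every word carrying all the meanings the set shares, and such closed sets act as the generalized synonym sets.

```python
# Toy word-meaning relation; the vocabulary and meanings are invented.
MEANINGS = {
    'big':   {'large-size'},
    'large': {'large-size', 'generous'},
    'grand': {'large-size', 'impressive'},
}

def common_meanings(words):
    """Meanings shared by every word in the set."""
    sets = [MEANINGS[w] for w in words]
    return set.intersection(*sets) if sets else set()

def closure(words):
    """All words whose meanings include every meaning shared by `words`."""
    shared = common_meanings(words)
    return {w for w, ms in MEANINGS.items() if shared <= ms}

print(closure({'large', 'grand'}))  # {'big', 'large', 'grand'}: all share 'large-size'
```

Ordering these closed sets by inclusion yields the lattice structure the abstract mentions; adding a word, a meaning, or a word-meaning pair only perturbs the lattice locally, which is what makes incremental update algorithms possible.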
11.
A major purpose of analysis is to represent precisely all relevant facts, as they are observed in the external world. A substantial problem in object-oriented analysis is that most modelling languages are better suited to building computational models than to developing conceptual models. It is a rather blind assumption that concepts that are convenient for design can also be applied during analysis. Preconditions, postconditions and invariants are typical examples of concepts with blurred semantics. At the level of analysis they are most often used to specify business rules. This paper introduces proper concepts for modelling business rules and specifies their semantics.
12.
Elvira Albert Puri Arenas Samir Genaim German Puebla Damiano Zanardini 《Theoretical Computer Science》2012,413(1):142-159
Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code since the latter is often not available. Cost analysis in this context has to consider, among others, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to such resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.
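To give a flavour of the output such an analysis produces (illustrative only; this is not COSTA's actual output format), cost relations for a simple counting loop and their closed-form bound might look like:

```latex
% Illustrative cost relations for ``while (i < n) i++;'' with symbolic
% instruction costs $c_{\mathit{test}}$ and $c_{\mathit{body}}$:
\begin{align*}
C_{\mathit{loop}}(i, n) &= c_{\mathit{test}} && \text{if } i \ge n\\
C_{\mathit{loop}}(i, n) &= c_{\mathit{test}} + c_{\mathit{body}}
    + C_{\mathit{loop}}(i + 1, n) && \text{if } i < n
\end{align*}
% Solving the recurrence yields the closed-form upper bound
% $C_{\mathit{loop}}(i, n) \le (n - i)\,(c_{\mathit{test}} + c_{\mathit{body}}) + c_{\mathit{test}}$.
```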
13.
Femke Reitsma John Laxton Stuart Ballard Werner Kuhn Alia Abdelmoty 《Computers & Geosciences》2009,35(4):706-709
Semantics, ontologies and eScience are key areas of research that aim to deal with the growing volume, number of sources and heterogeneity of geoscience data, information and knowledge. Following a workshop held at the eScience Institute in Edinburgh on 7–9 March 2008, this paper discusses some of the significant research topics and challenges for enhancing geospatial computing using semantic and grid technologies.
14.
F. Modugno N. G. Leveson J. D. Reese K. Partridge S. D. Sandys 《Requirements Engineering》1997,2(2):65-78
This paper describes an integrated approach to safety analysis of software requirements and demonstrates the feasibility and utility of applying the individual techniques and the integrated approach on the requirements specification of a guidance system for a high-speed civil transport being developed at NASA Ames. Each analysis found different types of errors in the specification; thus together the techniques provided a more comprehensive safety analysis than any individual technique. We also discovered that the more the analyst knew about the application and the model, the more successful they were in finding errors. Our findings imply that the most effective safety-analysis tools will assist rather than replace the analyst.

A shorter version of this paper appeared in the Proceedings of the 3rd International Symposium on Requirements Engineering, Annapolis, Maryland, January 1997. The research described has been partly funded by NASA/Langley Grant NAG-1-1495, NSF Grant CCR-9396181, and the California PATH Program of the University of California.
15.
Michael Wahler David Basin Achim D. Brucker Jana Koehler 《Software and Systems Modeling》2010,9(2):225-255
Precision and consistency are important prerequisites for class models to conform to their intended domain semantics. Precision can be achieved by augmenting models with design constraints, and consistency can be achieved by avoiding contradictory constraints. However, there are different views of what constitutes a contradiction for design constraints. Moreover, state-of-the-art analysis approaches for proving constrained models consistent either scale poorly or require the use of interactive theorem proving. In this paper, we present a heuristic approach for efficiently analyzing constraint specifications built from constraint patterns. This analysis is based on precise notions of consistency for constrained class models and exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial time. We introduce a consistency checker implementing these ideas and we report on case studies in applying our approach to analyze industrial-scale models. These studies show that pattern-based constraint development supports the creation of concise specifications and provides immediate feedback on model consistency.
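A deliberately simplified sketch of why pattern-based checking can be syntactic and polynomial-time (the pattern below is invented for illustration and is not from the paper's catalogue): if every constraint instantiates an interval-bound pattern, contradiction detection reduces to pairwise interval intersection.

```python
from itertools import combinations

def consistent(constraints):
    """constraints: list of (attribute, low, high) pattern instances.
    Two instances on the same attribute contradict iff their intervals
    are disjoint; checking all pairs is quadratic in the input size."""
    for (a1, lo1, hi1), (a2, lo2, hi2) in combinations(constraints, 2):
        if a1 == a2 and (hi1 < lo2 or hi2 < lo1):   # empty intersection
            return False
    return True

print(consistent([('age', 0, 150), ('age', 18, 65)]))   # True: intervals overlap
print(consistent([('age', 0, 10),  ('age', 18, 65)]))   # False: contradictory
```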
16.
Henry E. Kyburg 《Computational Intelligence》1997,13(2):215-257
In ordinary first-order logic, a valid inference in a language L is one in which the conclusion is true in every model of the language in which the premises are true. To accommodate inductive/uncertain/probabilistic/nonmonotonic inference, we weaken that demand to the demand that the conclusion be true in a large proportion of the models in which the relevant premises are true. More generally, we say that an inference is [p,q] valid if its conclusion is true in a proportion lying between p and q of those models in which the relevant premises are true. If we include a statistical variable binding operator "%" in our language, there are many quite general (and useful) things we can say about uncertain validity. A surprising result is that some of these things may conflict with Bayesian conditionalization.
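Restating the abstract's definition in symbols (notation assumed; over the relevant finite set of models):

```latex
% An inference from premises $\Gamma$ to conclusion $\varphi$ is
% $[p,q]$-valid iff
\[
  p \;\le\; \frac{\left|\{\, M : M \models \Gamma \text{ and } M \models \varphi \,\}\right|}
                 {\left|\{\, M : M \models \Gamma \,\}\right|} \;\le\; q .
\]
% Classical validity is then the special case $[1,1]$.
```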
17.
Computation and Analysis of Object-Oriented Design Complexity Metrics
Software maintenance occupies a considerably important position in the software development process, but its cost is often hard to control. Building on an analysis of an object-oriented complexity metric suite capable of predicting software maintainability, this paper examines, in light of practical experience, the shortcomings of and relationships among the suite's three metrics, and supplements them with some new observations where the three metrics fall short.
18.
19.
Chidamber S.R. Darcy D.P. Kemerer C.F. 《IEEE Transactions on Software Engineering》1998,24(8):629-639
With the increasing use of object-oriented methods in new software development, there is a growing need to both document and improve current practice in object-oriented design and development. In response to this need, a number of researchers have developed various metrics for object-oriented systems as proposed aids to the management of these systems. In this research, an analysis of a set of metrics proposed by Chidamber and Kemerer (1994) is performed in order to assess their usefulness for practising managers. First, an informal introduction to the metrics is provided by way of an extended example of their managerial use. Second, exploratory analyses of empirical data relating the metrics to productivity, rework effort and design effort on three commercial object-oriented systems are provided. The empirical results suggest that the metrics provide significant explanatory power for variations in these economic variables, over and above that provided by traditional measures, such as size in lines of code, and after controlling for the effects of individual developers.
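For concreteness, a rough sketch of simplified readings of three of the Chidamber-Kemerer metrics (every method weighted 1 for WMC, and Python introspection standing in for a real measurement tool; not the paper's procedure):

```python
import inspect

class Account:              # toy class hierarchy to measure
    def deposit(self): pass
    def withdraw(self): pass

class Savings(Account):
    def add_interest(self): pass

def wmc(cls):
    """Weighted Methods per Class, with every method weighted 1."""
    return len(inspect.getmembers(cls, inspect.isfunction))

def dit(cls):
    """Depth of Inheritance Tree: hops to the hierarchy root, not counting `object`."""
    return len(cls.__mro__) - 2

def noc(cls):
    """Number of Children: direct subclasses only."""
    return len(cls.__subclasses__())

print(wmc(Savings), dit(Savings), noc(Account))   # 3 1 1
```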
20.
Tactics-based approach for integrating non-functional requirements in object-oriented analysis and design
Non-Functional Requirements (NFRs) are rarely treated as "first-class" elements in software development the way Functional Requirements (FRs) are. Often NFRs are stated informally and incorporated in the final software as an afterthought. We leverage existing research on the treatment of NFRs to propose an approach that makes it possible to systematically analyze and design NFRs in parallel with FRs. Our approach is premised on the importance of focusing on tactics (the specific mechanisms used to fulfill NFRs) as opposed to focusing on NFRs themselves. The advantages of our approach include filling the gap between NFR elicitation and NFR implementation, treating NFRs systematically by grouping tactics so that tactics in the same group can be addressed uniformly, remedying some shortcomings in existing work (by prioritizing NFRs and analyzing tradeoffs among NFRs), and integrating FRs and NFRs by treating both as first-class entities.