Similar Documents
20 similar documents found (search time: 31 ms)
1.
Research on Programming Interfaces for Transactional Memory Parallel Programs
Programming interfaces for transactional memory parallel programs fall into three forms, depending on how and at what level they are implemented: library function interfaces, language extensions, and compiler directives. Taking RSTM, the Intel C/C++ software transactional memory compiler prototype, and OpenTM as examples, this paper discusses the characteristics of the three kinds of transactional memory programming interfaces, extends and refines the OpenTM programming interface, and offers an outlook on the future development of such interfaces.
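
To make the contrast concrete, here is a rough C++ sketch (not taken from the paper) of the same shared-counter update in the three interface styles; the "stm" calls are a trivial lock-based mock rather than the real RSTM API, and the keyword/pragma spellings are only indicative of the Intel STM extension and OpenTM directive styles.

    // Hedged sketch, not from the paper: the same shared-counter update in the
    // three interface styles. The "stm" calls are a trivial lock-based mock,
    // not the real RSTM API; the keyword/pragma spellings below are only
    // indicative of the Intel STM extension and the OpenTM directive styles.
    #include <mutex>

    namespace stm {                          // hypothetical library-style interface
        std::mutex global_lock;
        void begin()  { global_lock.lock(); }    // start a "transaction"
        void commit() { global_lock.unlock(); }  // commit (here: release the lock)
    }

    int shared_counter = 0;

    void increment_library_style() {
        stm::begin();                        // (1) library-function interface
        shared_counter += 1;
        stm::commit();
    }

    // (2) Language-extension style (Intel C/C++ STM prototype, indicative only):
    //         __tm_atomic { shared_counter += 1; }
    // (3) Compiler-directive style (OpenTM-like, indicative only):
    //         #pragma omp transaction
    //         { shared_counter += 1; }

    int main() { increment_library_style(); return shared_counter == 1 ? 0 : 1; }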

2.
A Semantics-Based QoS-Aware Web Service Discovery Mechanism
As the number of Web services offering the same functionality grows, quality of service (QoS) has become an important factor in users' choice of Web services. Existing mechanisms that select Web services by matching QoS attributes only at the syntactic level cannot adequately handle complex QoS attribute matching. This paper studies an approach that, driven by the user's QoS preferences, semantically compares the requested QoS with the QoS of candidate services and, combined with constraint programming, matches Web service QoS attributes at the semantic level to select the services that satisfy the matching requirements; finally, the candidate services that satisfy the QoS value constraints are subjected to an optimizing selection step to obtain the final matched candidates.
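
As a purely syntactic illustration of the final filter-and-optimize step (my own sketch, not the paper's mechanism, which adds ontology-based semantic matching of QoS attributes), a C++ version might look like this:

    // Purely syntactic sketch of the final selection step only: filter candidates
    // by hard QoS constraints, then pick an optimum. The paper's mechanism adds
    // ontology-based semantic matching of QoS attributes, which a plain struct
    // comparison like this does not capture. All names are illustrative.
    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    struct QoSRequest { double max_latency_ms; double min_availability; };
    struct Service    { std::string name; double latency_ms; double availability; };

    int main() {
        QoSRequest need{200.0, 0.99};
        std::vector<Service> candidates{
            {"svcA", 150, 0.995}, {"svcB", 300, 0.999}, {"svcC", 180, 0.97}};

        // Hard constraints: keep only services whose QoS satisfies the request.
        std::vector<Service> feasible;
        std::copy_if(candidates.begin(), candidates.end(),
                     std::back_inserter(feasible), [&](const Service& s) {
                         return s.latency_ms <= need.max_latency_ms &&
                                s.availability >= need.min_availability;
                     });

        // Optimizing selection: among feasible services, prefer lowest latency.
        auto best = std::min_element(feasible.begin(), feasible.end(),
                                     [](const Service& a, const Service& b) {
                                         return a.latency_ms < b.latency_ms;
                                     });
        if (best != feasible.end()) std::cout << "selected: " << best->name << "\n";
    }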

3.
4.
Abductive inferences are commonplace during natural language processing. Having identified some limitations of an existing parsimonious covering model of abductive diagnostic inference, we developed an extended, dual-route version to address issues in word sense disambiguation and logical form generation. The details of representing knowledge in this framework and the syntactic route of covering are described in a companion article [V. Dasigi, Int. J. Intell. Syst., 9, 571-608 (1994)]. Here, we describe the semantic covering process in detail. A dual-route algorithm that integrates syntactic and semantic covering is given. Taking advantage of the “transitivity” of irredundant syntactic covering, plausible semantic covers are searched for, based on some heuristics, in the space of irredundant syntactic covers. Syntactic covering identifies all possible candidates for semantic covering, which, in turn, helps focus syntactic covering. Attributing both syntactic and semantic facets to “open-class” linguistic concepts makes this integration possible. An experimental prototype has been developed to provide a proof-of-concept for these ideas in the context of expert system interfaces. The prototype has at least some ability to handle ungrammatical sentences, to perform some nonmonotonic inferences, etc. We believe this work provides a starting point for a nondeductive inference method for logical form generation, exploiting associative linguistic knowledge. © 1994 John Wiley & Sons, Inc.

5.
This paper presents a cognitive framework for describing behaviors involved in program composition, comprehension, debugging, modification, and the acquisition of new programming concepts, skills, and knowledge. An information processing model is presented which includes a long-term store of semantic and syntactic knowledge, and a working memory in which problem solutions are constructed. New experimental evidence is presented to support the model of syntactic/semantic interaction.

6.
Constraints are an effective tool to define sets of data by means of logical formulae. Our goal here is to survey the notion of constraint system and to give examples of constraint systems operating on various domains, such as natural, rational or real numbers, finite domains, and term domains. We classify the different methods used for solving constraints: syntactic methods based on transformations, semantic methods based on adequate representations of constraints, and hybrid methods combining transformations and enumerations. The concepts and methods are illustrated via examples. We also discuss applications of constraints to various fields, such as programming, operations research, and theorem proving.
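
For instance, a toy finite-domain constraint system can be solved by plain enumeration, the crudest of the semantic/enumerative methods mentioned above; the following C++ fragment (illustrative only) enumerates the solutions of x + y = 7, x < y over the domain {0,...,9}.

    // Toy finite-domain constraint system solved by plain enumeration
    // (illustrative only; real solvers combine propagation, rewriting and search).
    #include <cstdio>

    int main() {
        // Variables x, y with domain {0,...,9}; constraints: x + y == 7 and x < y.
        for (int x = 0; x <= 9; ++x)
            for (int y = 0; y <= 9; ++y)
                if (x + y == 7 && x < y)
                    std::printf("solution: x=%d y=%d\n", x, y);
        return 0;
    }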

7.
8.
The basic logic programming semantic concepts (query, solutions, solution forms) and the fundamental results, such as Herbrand theorems, are developed over any logical system, formalised as an institution, by employing ‘institution-independent’ concepts of variable, substitution, quantifier, and atomic formulae. This sets semantical foundations for a uniform development of logic programming over a large variety of computing science logics, allowing for a clean combination of logic programming with other computing paradigms.

9.
10.
Michael May. Knowledge, 2001, 14(8): 431-435
This paper will briefly present a semiotic approach to instrument interfaces based on a conceptual analysis of display and control components and their compositional semantics. Display and control components can be considered as prototypical objects that are themselves constructed from combinations of more elementary signs expressed in some combination of media. Different types of signs (or ‘representational modalities’) and different types of media have different semantic and syntactic properties. These properties will to some extent determine what different combinations of media and signs are good for, i.e. what kinds of information can be adequately expressed in a given media–modality combination, and what kinds of cognitive support it will give to the social agents using it in a work context. The relevance of such an approach is partly in enhancing our understanding of instrument interfaces and human–machine interaction in complex work domains and partly to support the design and development of flexible and tailorable instrument interfaces.

11.
12.
A region calculus is a programming language calculus with explicit instrumentation for memory management. Every value is annotated with a region in which it is stored, and regions are allocated and deallocated in a stack-like fashion. The annotations can be statically inferred by a type and effect system, making a region calculus suitable as an intermediate language for a compiler of statically typed programming languages. Although a lot of attention has been paid to type soundness properties of different flavors of region calculi, it seems that little effort has been made to develop a semantic framework. In this paper, we present a theory based on bisimulation, which serves as a coinductive proof principle for showing equivalences of polymorphically region-annotated terms. Our notion of bisimilarity is reminiscent of open bisimilarity for the π-calculus, and we prove it sound and complete with respect to Morris-style contextual equivalence. As an application, we formulate a syntactic equational theory, which is used elsewhere to prove the soundness of a specializer based on region inference. We use our bisimulation framework to show that the equational theory is sound with respect to contextual equivalence.
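
The stack-like region discipline can be pictured with a small C++ arena sketch (an operational analogy only, not the typed calculus or its bisimulation theory): values are placed in an explicitly named region, and the whole region is freed at once when it goes out of scope.

    // Operational analogy only (not the typed calculus): values are allocated
    // inside an explicitly named region, and the whole region is deallocated at
    // once, stack-like, when it goes out of scope.
    #include <cstddef>
    #include <vector>

    class Region {                           // roughly: "letregion r in ... end"
        std::vector<char*> blocks_;
    public:
        void* alloc(std::size_t n) {         // roughly: "e at r"
            char* p = new char[n];
            blocks_.push_back(p);
            return p;
        }
        ~Region() {                          // whole region freed wholesale
            for (char* p : blocks_) delete[] p;
        }
    };

    int main() {
        Region r;                                        // region created
        int* x = static_cast<int*>(r.alloc(sizeof(int)));
        *x = 42;
        return *x == 42 ? 0 : 1;
    }   // region r, and everything allocated in it, is deallocated here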

13.
Lisp applications need to show a reasonable cost-benefit relationship between the offered expressiveness and their demand for storage and run-time. Drawbacks in efficiency, apparent in Lisp as a dynamically typed programming language, can be avoided by optimizations. Statically inferred type information can be decisive for the success of these optimizations. This paper describes a practical approach to type inference realized in a module and application compiler for EuLisp. The approach is partly related to Milner-style polymorphic type inference, but differs by describing functions with generic type schemes. Dependencies between argument and result types can be expressed more precisely by using generic type schemes of several lines than by using the common one-line type schemes. Generic type schemes contain types of a refined complementary lattice and bounded type variables. Besides standard and defined types, so-called strategic types (e.g. singleton, zero, number-list) are combined into the type lattice. Local, global and control flow inference using generic type schemes with refined types generates precise typings of defined functions. Due to module compilation, inferred type schemes of exported functions can be stored in export interfaces, so they may be reused when imported elsewhere. This work was supported by the German Federal Ministry for Research and Technology (BMFT) within the joint project APPLY. The partners in this project are the Christian Albrechts University Kiel, the Fraunhofer Institute for Software Engineering and Systems Engineering (ISST), the German National Research Centre for Computer Science (GMD), and VW-GEDAS.

14.
In the traditional programming paradigm, data structures and algorithms are developed for specific data types and requirements. This leads to code redundancy and inflexibility, thus not allowing effective code reuse for similar applications. One effective approach to increase code reuse is generic programming, which focuses on the development of efficient, reusable software libraries through suitable abstractions for the common requirements. In this paper, we present how we applied generic programming to an ongoing effort for mesh-based adaptive simulations on massively parallel computers. Three generic components, iterator, set and tag, were developed using design patterns, C++ template programming and the standard template library. The scaling studies on petascale supercomputers demonstrate the efficiency of the reusable, generic components, which do not sacrifice the performance of the previous tools developed in the traditional object-oriented programming paradigm.
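
The flavor of the three components can be suggested with a stripped-down C++/STL sketch using hypothetical names; the real components target distributed mesh entities, whereas this sketch reduces an "entity" to an integer id and drops all the parallel machinery.

    // Stripped-down sketch with hypothetical names: the real components target
    // distributed mesh entities, whereas here an "entity" is just an integer id
    // and all parallel machinery is omitted.
    #include <iostream>
    #include <unordered_map>
    #include <vector>

    using EntityId = int;

    template <typename T>
    class Tag {                              // generic "tag": attach a T to entities
        std::unordered_map<EntityId, T> data_;
    public:
        void set(EntityId e, const T& v) { data_[e] = v; }
        const T& get(EntityId e) const { return data_.at(e); }
    };

    int main() {
        std::vector<EntityId> element_set{0, 1, 2, 3};   // "set" component
        Tag<double> error_estimate;                      // "tag" component

        for (EntityId e : element_set)                   // "iterator" component
            error_estimate.set(e, 0.1 * e);

        std::cout << error_estimate.get(3) << "\n";      // prints 0.3
    }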

15.
Special-purpose processors such as DSPs mainly target specific applications, so their instruction sets often support only a limited set of data types. When such a processor is programmed in a high-level language and an exotic data type that the processor does not support is used, the compiler must convert it, while preserving the semantics, into a sequence of instructions the processor does support. This paper proposes a method for handling exotic data types in a VLIW DSP compiler, covering the annotation of intermediate code that contains exotic data types, the computation of scheduling dependences, and improvements to register allocation. The method requires relatively small changes to the compiler and is efficient.
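
The kind of lowering referred to can be illustrated with a simple C++ example (a generic illustration, not the paper's method): a 64-bit addition, assumed unsupported on a hypothetical 32-bit DSP, is expanded into supported 32-bit operations with an explicit carry.

    // Generic illustration, not the paper's method: a 64-bit add, assumed to be
    // unsupported on a hypothetical 32-bit DSP, is lowered into two supported
    // 32-bit adds plus an explicit carry, preserving the semantics.
    #include <cstdint>
    #include <cstdio>

    uint64_t add64_via_32(uint64_t a, uint64_t b) {
        uint32_t a_lo = static_cast<uint32_t>(a), a_hi = static_cast<uint32_t>(a >> 32);
        uint32_t b_lo = static_cast<uint32_t>(b), b_hi = static_cast<uint32_t>(b >> 32);

        uint32_t lo    = a_lo + b_lo;            // low-word add (supported op)
        uint32_t carry = (lo < a_lo) ? 1u : 0u;  // carry detected from wrap-around
        uint32_t hi    = a_hi + b_hi + carry;    // high-word add with carry

        return (static_cast<uint64_t>(hi) << 32) | lo;
    }

    int main() {
        uint64_t a = 0x00000001FFFFFFFFULL, b = 2;
        std::printf("%llx\n", static_cast<unsigned long long>(add64_via_32(a, b)));
        return add64_via_32(a, b) == a + b ? 0 : 1;
    }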

16.
Providing runtime information about generic types, that is, reifying generics, is a challenging problem studied in several research papers in recent years. This problem is not tackled in the current version of the Java programming language (Java 6), which consequently suffers from serious safety and coherence problems. The quest for finding effective and efficient solutions to this problem is still open, and is further complicated by the new mechanism of wildcards introduced in Java J2SE 5.0: its reification aspects are currently unexplored and pose serious semantics and implementation issues. In this paper, we discuss implementation support for wildcard types in Java. We first analyse the problem from an abstract viewpoint, discussing the issues that have to be faced in order to extend an existing reification technique so as to support wildcards, namely, subtyping, capture conversion and wildcard capture in method calls. Secondly, we present an implementation in the context of the EGO compiler. EGO is an approach for efficiently supporting runtime generics at compile time: synthetic code is automatically added to the source code by the extended compiler, so as to create generic runtime type information on a by-need basis, store it into object instances, and retrieve it when necessary in type-dependent operations. The solution discussed in this paper makes the EGO compiler the first reification approach entirely dealing with the present version of the Java programming language.

17.
Traits, as sets of behaviors, can provide a good mechanism for reusability. However, they are limited in important ways, are not present in widely used programming and modeling languages, and hence are not readily available for use by mainstream developers. In this paper, we add UML associations and other modeling concepts to traits and apply them to Java and C++ through model-driven development. We also extend traits with required interfaces so that dependencies at the semantic level become part of their usage, rather than simple syntactic capture. All this is accomplished in Umple, a textual modeling language based upon UML that allows adding programming constructs to the model. We applied the work to two case studies. The results show that traits can be promoted to the modeling level, with gains in flexibility and reusability.

18.
Konrad Zuse was the first person in history to build a working digital computer, a fact that is still not generally acknowledged. Even less known is that in the years 1943-1945, Zuse developed a high-level programming model and, based on it, an algorithmic programming language called Plankalkül (Plan Calculus). The Plankalkül features binary data structure types, thus supporting a loop-free programming style for logical or relational problems. As a language for numerical applications, the Plankalkül already had the essential features of a “von Neumann language”, though at the level of an operator language. Consequently, the Plankalkül is in some aspects equivalent to, and in others more powerful than, the von Neumann programming model that came to dominate programming for a long time. To find language concepts similar to those of the Plankalkül, one has to look at “non-von Neumann languages” such as APL or the relational algebra. This paper conveys the syntactic and semantic flavor of the Plankalkül, without presenting all its syntactic idiosyncrasies. Rather, it points out that the Plankalkül was not only the first high-level programming language but in some aspects conceptually ahead of the high-level languages that evolved a decade later.

19.
Although modularisation is basic to modern computing, it has been little studied for logic-based programming. We treat modularisation for equational logic programming using the institution of category-based equational logic in three different ways: (1) to provide a generic satisfaction condition for equational logics; (2) to give a category-based semantics for queries and their solutions; and (3) as an abstract definition of compilation from one (equational) logic programming language to another. Regarding (2), we study soundness and completeness for equational logic programming queries and their solutions. This can be understood as ordinary soundness and completeness in a suitable “non-logical” institution. Soundness holds for all module imports, but completeness only holds for conservative module imports. Category-based equational signatures are seen as modules, and morphisms of such signatures as module imports. Regarding (3), completeness corresponds to compiler correctness. The results of this research apply to languages based on a wide class of equational logic systems, including Horn clause logic, with or without equality; all variants of order- and many-sorted equational logic, including working modulo a set of axioms; constraint logic programming over arbitrary user-defined data types; and any combination of the above. Most importantly, due to the level of abstraction, this research makes it possible to give semantics to, and to study modularisation for, equational logic programming developed over non-conventional structures.

20.
Compiler support for intervals as intrinsic data types is essential for promoting the development and widespread use of interval software. It also plays an important role in encouraging the development of hardware support for interval arithmetic. This paper describes modifications made to the GNU Fortran Compiler to provide support for interval arithmetic. These modifications are based on a recently proposed Fortran 77 Interval Arithmetic Specification, which provides a standard for supporting interval arithmetic in Fortran. This paper also describes the design of the compiler's interval runtime libraries and the methodology used to test the compiler. The compiler and runtime libraries are designed to be portable to platforms that support the IEEE 754 floating point standard.
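
A minimal C++ sketch of such an interval data type is shown below. Assumptions: outward rounding is crudely approximated with std::nextafter, whereas a conforming runtime library would use directed IEEE 754 rounding as prescribed by the interval specification; the operation set is reduced to addition and multiplication.

    // Minimal sketch of an interval type. Assumption: outward rounding is crudely
    // approximated with std::nextafter; a conforming runtime library would use
    // directed IEEE 754 rounding as prescribed by the interval specification.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <limits>

    struct Interval { double lo, hi; };

    Interval widen(Interval x) {   // push both endpoints outward by one ulp
        return {std::nextafter(x.lo, -std::numeric_limits<double>::infinity()),
                std::nextafter(x.hi,  std::numeric_limits<double>::infinity())};
    }

    Interval add(Interval a, Interval b) { return widen({a.lo + b.lo, a.hi + b.hi}); }

    Interval mul(Interval a, Interval b) {
        double p[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
        return widen({*std::min_element(p, p + 4), *std::max_element(p, p + 4)});
    }

    int main() {
        Interval x{1.0, 2.0}, y{-3.0, 0.5};
        Interval s = add(x, y), m = mul(x, y);
        std::printf("sum = [%g, %g]  product = [%g, %g]\n", s.lo, s.hi, m.lo, m.hi);
    }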

