Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
2.
王飞, 易绵竹, 谭新. 《计算机科学》, 2018, 45(Z6): 101-105
Traditional knowledge representations suffer from limited knowledge coverage and incomplete formal description of semantics, so computers cannot understand natural language accurately. Inspired by the working principles of neurons in the brain, and starting from semantic analysis, this work builds an ontology-based semantic network at both the concept level and the word level, giving it neural-network-like properties: it can understand the semantics of text accurately, capture the different meanings a word takes in different domains, and cover the semantic-composition characteristics of text generation. To formalize the model further, it is represented as a matrix, and singular value decomposition is used to reduce the size and complexity of the matrix, making it easier to describe the relationship between words and concepts.
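To make the last step concrete, here is a minimal sketch of compressing a word-concept association matrix with singular value decomposition; the words, concept labels, and weights are illustrative assumptions, not taken from the paper.

```python
# Sketch only: a tiny word-by-concept association matrix reduced with SVD.
# Vocabulary, concept labels, and weights below are made up for illustration.
import numpy as np

words = ["bank", "river", "money", "loan"]
concepts = ["finance", "geography"]
M = np.array([[3.0, 1.0],      # "bank" relates to both concepts
              [0.0, 4.0],      # "river" is purely geographic
              [5.0, 0.0],
              [4.0, 0.0]])

U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 1                                         # keep only the largest singular value
M_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]   # reduced-rank word-concept matrix

print("rank-1 approximation error:", np.linalg.norm(M - M_k))
```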

3.
Much recent research has focused on applying Autonomic Computing principles to achieve constrained self-management in adaptive systems, through self-monitoring and analysis, strategy planning, and self-adjustment. However, in a highly distributed system, just monitoring current operation and context is a complex and largely unsolved problem domain. This difficulty is particularly evident in the areas of network management, pervasive computing, and autonomic communications. This paper presents a model for the filtered dissemination of semantically enriched knowledge over a large loosely coupled network of distributed heterogeneous autonomic agents, removing the need to bind explicitly to all of the potential sources of that knowledge. This paper presents an implementation of such a knowledge delivery service, which enables the efficient routing of distributed heterogeneous knowledge to, and only to, nodes that have expressed an interest in that knowledge. This gathered knowledge can then be used as the operational or context information needed to analyze the system's behavior as part of an autonomic control loop. As a case study this paper focuses on contextual knowledge distribution for autonomic network management. A comparative evaluation of the performance of the knowledge delivery service is also provided.

John Keeney holds a BAI degree in Computer Engineering and a PhD in Computer Science from Trinity College Dublin. His primary interests are in controlling autonomic adaptable systems, particularly when those systems are distributed. David Lewis graduated in Electronics Engineering from the University of Southampton and gained his PhD in Computer Science from University College London. His areas of interest include integrated network and service management, distributed system engineering, adaptive and autonomic systems, semantic services and pervasive computing. Declan O'Sullivan was awarded his primary degree, MSc and PhD in Computer Science from Trinity College Dublin. He has a particular interest in the issues of semantic interoperability and heterogeneous information querying within a range of areas, primarily network and service management, autonomic management, and pervasive computing.
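The core idea of routing knowledge only to interested nodes can be pictured with a toy publish/subscribe broker. This is a sketch under assumed names (KnowledgeBroker, the topic strings), not the paper's actual knowledge delivery service.

```python
# Minimal sketch of interest-based knowledge delivery: producers publish to a
# broker, and knowledge is forwarded to, and only to, nodes that registered an
# interest, so neither side binds explicitly to the other.
from collections import defaultdict

class KnowledgeBroker:
    def __init__(self):
        self.interests = defaultdict(list)    # topic -> interested callbacks

    def subscribe(self, topic, callback):
        self.interests[topic].append(callback)

    def publish(self, topic, fact):
        for deliver in self.interests.get(topic, []):
            deliver(fact)

broker = KnowledgeBroker()
broker.subscribe("network/link-load", lambda f: print("manager received:", f))
broker.publish("network/link-load", {"link": "r1-r2", "load": 0.87})   # delivered
broker.publish("network/temperature", {"node": "r3", "celsius": 41})   # no interest, dropped
```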

4.
5.
6.
This is a continuation of our previous results (Y. Watanabe, N. Yamamoto, T. Nakao, and T. Nishida, A Numerical Verification of Nontrivial Solutions for the Heat Convection Problem, to appear in the Journal of Mathematical Fluid Mechanics). In that work, the authors considered two-dimensional Rayleigh-Bénard convection and proposed an approach to prove existence of steady-state solutions based on an infinite-dimensional fixed-point theorem using a Newton-like operator with spectral approximation and constructive error estimates. We numerically verified several exact nontrivial solutions which correspond to solutions bifurcating from the trivial solution. This paper shows more detailed results of verification for given Prandtl and Rayleigh numbers. In particular, we found a new and interesting solution branch which was not obtained in the previous study and which should provide important information for clarifying the global bifurcation structure. All numerical examples discussed take into account the effects of rounding errors in the floating-point computations.

7.
In ordinary first-order logic, a valid inference in a language L is one in which the conclusion is true in every model of the language in which the premises are true. To accommodate inductive/uncertain/probabilistic/nonmonotonic inference, we weaken that demand to the demand that the conclusion be true in a large proportion of the models in which the relevant premises are true. More generally, we say that an inference is [p,q] valid if its conclusion is true in a proportion lying between p and q of those models in which the relevant premises are true. If we include a statistical variable-binding operator "%" in our language, there are many quite general (and useful) things we can say about uncertain validity. A surprising result is that some of these things may conflict with Bayesian conditionalization.
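One way to state the definition explicitly, as an illustrative formalization over a finite class of models (the notation here is an assumption, not the author's own formalism):

```latex
% An illustrative reading of [p,q]-validity over a finite class of models
\[
  \Gamma \models_{[p,q]} \varphi
  \quad\Longleftrightarrow\quad
  p \;\le\;
  \frac{\bigl|\{\, M \in \mathcal{M}(\Gamma) : M \models \varphi \,\}\bigr|}
       {\bigl|\mathcal{M}(\Gamma)\bigr|}
  \;\le\; q
\]
```

where $\mathcal{M}(\Gamma)$ denotes the models in which the relevant premises are true; ordinary first-order validity is then the special case [1,1].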

8.
Many generic constructions for building secure cryptosystems from primitives with a lower level of security have been proposed. Providing security proofs has also become standard practice. There is, however, a lack of automated verification procedures that analyze such cryptosystems and provide security proofs. In this paper, we present a sound and automated procedure that allows us to verify that a generic asymmetric encryption scheme is secure against chosen-plaintext attacks in the random oracle model. It has been applied to several examples of encryption schemes, among which are the constructions of Bellare-Rogaway (1993) and of Pointcheval (PKC 2000).
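For orientation, here is a toy sketch of the basic Bellare-Rogaway 1993 construction mentioned above, E(m; r) = f(r) || G(r) XOR m, with f a trapdoor permutation and G a random oracle. The tiny RSA modulus and the use of SHA-256 for G are illustrative assumptions; this is a sketch of the kind of scheme the procedure reasons about, not the verification procedure itself, and it is not secure.

```python
# Sketch only: basic Bellare-Rogaway 1993 encryption with toy, insecure parameters.
import hashlib, os

n, e, d = 3233, 17, 2753                 # toy RSA trapdoor permutation (p=61, q=53)

def G(r: int, length: int) -> bytes:     # random oracle G, modeled here by SHA-256
    return hashlib.sha256(r.to_bytes(2, "big")).digest()[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(m: bytes):
    r = int.from_bytes(os.urandom(2), "big") % n     # fresh randomness
    return pow(r, e, n), xor(G(r, len(m)), m)        # (f(r), G(r) XOR m)

def decrypt(c):
    y, masked = c
    r = pow(y, d, n)                                 # invert f with the trapdoor
    return xor(G(r, len(masked)), masked)

assert decrypt(encrypt(b"hello")) == b"hello"
```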

9.
This paper presents a project whose main objective is to explore the ontology-based development of Domain Specific Languages (DSL), more precisely, of their underlying grammar. After reviewing the basic concepts characterizing ontologies and DSLs, we introduce a tool, Onto2Gra, that takes advantage of the knowledge described by the ontology and automatically generates a grammar for a DSL that makes it possible to talk about the domain described by that ontology. This approach represents a rigorous method to create, in a secure and effective way, a grammar for a new specialized language restricted to a concrete domain. The usual process of creating a grammar from scratch is, like every creative activity, difficult, slow and error prone; so this proposal is, from a grammar engineering point of view, of the utmost importance. After the grammar generation phase, the grammar engineer can manipulate it to add syntactic sugar to improve the final language quality or even to add specific semantic actions. The Onto2Gra project is composed of three engines. The main one is OWL2DSL, the component that converts an OWL ontology into a complete attribute grammar for the construction of an internal representation of all the input data. The two additional modules are Onto2OWL, which converts ontologies written in OntoDL into standard OWL, and DDesc2OWL, which converts domain instances written in the new DSL into the initial OWL ontology.
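The abstract does not spell out OWL2DSL's mapping rules, but the general idea can be sketched: each class becomes a nonterminal and each of its properties becomes a slot in the corresponding production. The tiny ontology and the grammar notation below are illustrative assumptions, not the tool's actual output.

```python
# Sketch only: deriving grammar productions from a hypothetical class/property ontology.
ontology = {
    "Book":   ["title", "year"],
    "Author": ["name"],
}

def ontology_to_grammar(onto):
    rules = []
    for cls, props in onto.items():
        slots = " ".join(f"'{p}' ':' STRING" for p in props)
        rules.append(f"{cls.lower()} : '{cls}' '{{' {slots} '}}' ;")
    return "\n".join(rules)

print(ontology_to_grammar(ontology))
# book : 'Book' '{' 'title' ':' STRING 'year' ':' STRING '}' ;
# author : 'Author' '{' 'name' ':' STRING '}' ;
```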

10.
In this paper we classify several algorithmic problems in group theory in the complexity classes PZK and SZK (problems with perfect/statistical zero-knowledge proofs, respectively). Prior to this, these problems were known to be in AM ∩ coAM. As SZK ⊆ AM ∩ coAM, we have a tighter upper bound for these problems. Specifically:
•  We show that the permutation group problems Coset Intersection, Double Coset Membership, and Group Conjugacy are in PZK (a sketch of a protocol in this spirit appears after this list). Further, the complements of these problems also have perfect zero-knowledge proofs (in the liberal sense). We also show that permutation group isomorphism for solvable groups is in PZK. As an ingredient of this protocol, we design a randomized algorithm for sampling short presentations of solvable permutation groups.
•  We show that the complements of all the above problems have concurrent zero-knowledge proofs.
•  We prove that the above problems for black-box groups are in SZK.
•  Finally, we also show that some of the problems have SZK protocols with efficient provers in the sense of Micciancio and Vadhan (Lecture Notes in Comput. Sci. 2729, 282–298, 2003).
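To illustrate the flavour of such protocols, below is a sketch of a perfect zero-knowledge style interaction for Group Conjugacy on permutations, in the spirit of the classical graph-isomorphism protocol. It is an illustration only, not the paper's exact protocol, and the prover/verifier are collapsed into one script for brevity.

```python
# Sketch: the prover knows s with s^-1 * x * s = y and convinces the verifier
# that x and y are conjugate without revealing s (graph-isomorphism style).
import random

def compose(p, q):                 # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugate(x, s):               # s^-1 * x * s
    return compose(inverse(s), compose(x, s))

def random_perm(n):
    p = list(range(n)); random.shuffle(p); return tuple(p)

n = 6
s = random_perm(n)                 # prover's secret witness
x = random_perm(n)
y = conjugate(x, s)                # public instance: x and y are conjugate

for _ in range(20):                # 20 rounds; soundness error 2^-20
    r = random_perm(n)
    z = conjugate(y, r)            # prover's commitment
    if random.randint(0, 1) == 0:  # verifier's challenge bit
        assert z == conjugate(y, r)          # prover opens r
    else:
        t = compose(s, r)                    # prover opens t = s * r
        assert z == conjugate(x, t)          # t^-1 x t = r^-1 (s^-1 x s) r = r^-1 y r
print("verifier accepts")
```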

11.
The Foundation for Intelligent Physical Agents (FIPA) standardisation body has produced a set of specifications outlining a generic model for the architecture and operation of agent-based systems. The FIPA'97 Specification Part 2 is the normative specification of an Agent Communication Language (ACL) which agents use to talk to each other. The FIPA ACL is based on speech act theory. Its syntax is defined by performatives parameterised by attribute-value pairs, while its semantics is given in terms of the mental states of the communicating agents (i.e. intentionality). However, it is not clear if the formal semantics is meant as a normative or informative specification. The primary purpose of this paper is then to give an expository analysis of the FIPA ACL semantics to clarify this situation. We also offer some guidelines motivated by our own analysis, experience and understanding of how the semantic definitions and logical axioms should be interpreted and applied. However, our conclusion is that while the FIPA ACL specification offers significant potential to a developer using it for guidance, there are limitations on using an agent's mental state to specify the meaning of a performative as part of a normative standard. We consider some possibilities for making improvements in this direction.

12.
The research reported in this article was spawned by a colleague's request to find an elegant proof (of a theorem from Boolean algebra) to replace his proof consisting of 816 deduced steps. The request was met by finding a proof consisting of 100 deduced steps. The methodology used to obtain the far shorter proof is presented in detail through a sequence of experiments. Although clearly not an algorithm, the methodology is sufficiently general to enable its use for seeking elegant proofs regardless of the domain of study. In addition to (usually) being more elegant, shorter proofs often provide the needed path to constructing a more efficient circuit, a more effective algorithm, and the like. The methodology relies heavily on the assistance of McCune's automated reasoning program OTTER. Of the aspects of proof elegance, the main focus here is on proof length, with brief attention paid to the type of term present, the number of variables required, and the complexity of deduced steps. The methodology is iterative, relying heavily on the use of three strategies: the resonance strategy, the hot list strategy, and McCune's ratio strategy. These strategies, as well as other features on which the methodology relies, do exhibit tendencies that can be exploited in the search for shorter proofs and for other objectives. To provide some insight regarding the value of the methodology, I discuss its successful application to other problems from Boolean algebra and to problems from lattice theory. Research suggestions and challenges are also offered.

13.
14.
Ontological fuzzy agent for electrocardiogram application
The electrocardiogram (ECG) signal is adopted extensively as a low-cost diagnostic procedure to provide information concerning the health status of the heart. However, the QRS complex must be calculated accurately before proceeding with heart rate variability (HRV) analysis. In particular, the R peak needs to be detected reliably. This study presents an adaptive fuzzy detector to detect the R peak correctly. Additionally, an ontological fuzzy agent is presented to process the collection of ECG signals. The required knowledge is stored in the ontology, which comprises several personal ontologies and is predefined by domain experts. The ontological fuzzy agent retrieves the ECG signals with R peaks marked for HRV analysis and further ECG applications. It contains a personal fuzzy filter, an HRV analysis mechanism, and a fuzzy normed inference engine. Moreover, the ECG fuzzy signal space and some important properties are presented to define the working environment of the agent. An experimental platform has been constructed to test the performance of the agent. The results indicate that the proposed method can work effectively.
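As a rough illustration of the pipeline (R-peak detection followed by an HRV statistic), here is a plain threshold-based sketch on synthetic data. It is not the paper's adaptive fuzzy detector; the sampling rate, threshold rule, and signal are assumptions.

```python
# Sketch only: naive R-peak picking and a basic HRV statistic (SDNN).
import numpy as np

fs = 250                                   # sampling frequency in Hz (assumed)

def detect_r_peaks(ecg, threshold_ratio=0.6, refractory_s=0.25):
    threshold = threshold_ratio * np.max(ecg)
    refractory = int(refractory_s * fs)    # ignore peaks closer than 250 ms
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i-1] and ecg[i] >= ecg[i+1] \
                and i - last >= refractory:
            peaks.append(i); last = i
    return np.array(peaks)

def sdnn(peaks):
    rr = np.diff(peaks) / fs * 1000.0      # RR intervals in milliseconds
    return np.std(rr)

# Synthetic "ECG": baseline noise with a sharp peak every 200 samples (~75 bpm).
ecg = 0.05 * np.random.randn(2000)
ecg[::200] += 1.0
peaks = detect_r_peaks(ecg)
print("R peaks at samples:", peaks)
print("SDNN (ms):", sdnn(peaks))
```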

15.
The analysis of applications for software development is usually guided and, hence, constrained by concepts and constructs of structured and object-oriented programming languages. It produces an analysis model that bears no essential difference from a software design. Here, we introduce an ontological Weltanschauung into application analysis for understanding essentials and novelties which are obscured in structured and object-oriented methods of application analysis. The ontological viewpoint is reified with a systemic framework, or simply an ontology, which can be related to software design. Thus, we can apply ontological analysis in software development.

16.
This paper proposes to specify semantic definitions for logic programming languages such as Prolog in terms of an oracle which specifies the control strategy and identifies which clauses are to be applied to resolve a given goal. The approach is quite general. It can be applied to Prolog to specify both operational and declarative semantics, as well as to other logic programming languages. Previous semantic definitions for Prolog typically encode the sequential depth-first search of the language into various mathematical frameworks. Such semantics mimic a Prolog interpreter in the sense that following the "leftmost" infinite path in the computation tree excludes computation to the right of this path from being considered by the semantics. The basic idea in this paper is to abstract away from the sequential control of Prolog and to provide a declarative characterization of the clauses to apply to a given goal. The decision whether or not to apply a clause is viewed as a query to an oracle which is specified from within the semantics and reasoned about from outside. This approach results in simple and concise semantic definitions which are more useful for arguing the correctness of program transformations and providing the basis for abstract interpretations than previous proposals.
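A small sketch may help picture the idea of delegating clause selection to an oracle: below, a toy SLD-resolution loop asks an oracle function which program clauses to try for the current goal, instead of hard-wiring Prolog's top-to-bottom, depth-first order. The term representation, names, and the example program are illustrative assumptions, not the paper's formal semantics.

```python
# Sketch only: SLD resolution where clause choice is delegated to an oracle.
# Variables are uppercase strings; compound terms are tuples (functor, args...).

def walk(t, s):
    while isinstance(t, str) and t[:1].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[:1].isupper():
        return {**s, a: b}
    if isinstance(b, str) and b[:1].isupper():
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, k):                       # give clause variables a fresh suffix
    if isinstance(t, str) and t[:1].isupper():
        return f"{t}_{k}"
    if isinstance(t, tuple):
        return (t[0],) + tuple(rename(x, k) for x in t[1:])
    return t

def solve(goals, program, oracle, s=None, depth=0):
    s = {} if s is None else s
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    # The oracle decides which clauses may be used to resolve this goal.
    for i in oracle(goal, program, s):
        head, body = program[i]
        head, body = rename(head, depth), [rename(b, depth) for b in body]
        s2 = unify(goal, head, s)
        if s2 is not None:
            yield from solve(body + rest, program, oracle, s2, depth + 1)

program = [  # anc(X,Y) :- parent(X,Y).   anc(X,Y) :- parent(X,Z), anc(Z,Y).
    (("parent", "alice", "bob"), []),
    (("parent", "bob", "carol"), []),
    (("anc", "X", "Y"), [("parent", "X", "Y")]),
    (("anc", "X", "Y"), [("parent", "X", "Z"), ("anc", "Z", "Y")]),
]
prolog_order = lambda goal, prog, s: range(len(prog))   # mimics Prolog's textual order

for answer in solve([("anc", "alice", "Who")], program, prolog_order):
    print("Who =", walk("Who", answer))     # bob, then carol
```

Swapping in a different oracle (for example, one that prunes clauses known to fail) changes the control strategy without touching the resolution rule itself, which is the separation the paper argues for.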

17.
Resultants Semantics for Prolog

18.
19.
This paper describes our experience using the interactive theorem prover Athena for proving the correctness of abstract interpretation-based dataflow analyses. For each analysis, our methodology requires the analysis designer to formally specify the property lattice, the transfer functions, and the desired modeling relation between the concrete program states and the results computed by the analysis. The goal of the correctness proof is to prove that the desired modeling relation holds. The proof allows the analysis clients to rely on the modeling relation for their own correctness. To reduce the complexity of the proofs, we separate the proof of each dataflow analysis into two parts: a generic part, proven once, independent of any specific analysis; and several analysis-specific conditions proven in Athena.
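To make the three ingredients concrete, here is a toy sketch of a property lattice, a transfer function, and a modeling relation for a sign analysis of one integer variable, with the local soundness obligation checked by brute force rather than by an Athena proof. The domain and the statement are illustrative assumptions, not the paper's analyses.

```python
# Sketch only: property lattice, transfer function, and modeling relation
# for a toy sign analysis of a single integer variable.

# Property lattice: BOT <= NEG, ZERO, POS <= TOP
BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"

def join(a, b):
    if a == BOT: return b
    if b == BOT: return a
    return a if a == b else TOP

# Transfer function for the statement  x := x + 1  on the abstract domain.
def transfer_incr(sign):
    # a negative plus one may be negative or zero, so approximate with TOP
    return {BOT: BOT, NEG: TOP, ZERO: POS, POS: POS, TOP: TOP}[sign]

def alpha(n):                       # abstraction of a concrete integer
    return NEG if n < 0 else ZERO if n == 0 else POS

def models(n, sign):                # modeling relation: n is described by sign
    return sign == TOP or alpha(n) == sign

# The correctness obligation, checked here by enumeration instead of proof:
# if n is modeled by s, then n + 1 is modeled by transfer_incr(s).
assert all(models(n + 1, transfer_incr(s))
           for n in range(-50, 50)
           for s in (BOT, NEG, ZERO, POS, TOP) if models(n, s))
print("local soundness of transfer_incr holds on the sampled range")
```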

20.
We describe novel computational techniques for constructing induction rules for deductive synthesis proofs. Deductive synthesis holds out the promise of automated construction of correct computer programs from specifications of their desired behaviour. Synthesis of programs with iteration or recursion requires inductive proof, but standard techniques for the construction of appropriate induction rules are restricted to recycling the recursive structure of the specifications. What is needed are induction-rule construction techniques that can introduce novel recursive structures. We show that a combination of rippling and the use of meta-variables as a least-commitment device can provide such novelty.
