Similar Documents
20 similar documents retrieved.
1.
Constructing knowledge systems is viewed as a modeling activity for developing structured knowledge and reasoning models. To ensure well-formed models, the use of some knowledge engineering methodology is crucial. Additionally, reusing models can significantly reduce the time and cost of building a new application. Reusing knowledge components across different applications and domains can help acquire expert knowledge and accurately describe the reasoning process. In fact, current knowledge engineering research has taken major initiatives in the development of knowledge systems by reusing generic components, such as ontologies or problem-solving methods. This article shows how we developed a diagnosis-aid system by reusing and adapting generic knowledge components for diagnosing eye emergencies.

2.
We present a computing environment for origami on the web. The environment consists of the computational origami engine Eos for origami construction, visualization, and geometrical reasoning; WebEos, which provides a web interface to the functionalities of Eos; and the web service system Scorum for symbolic computing web services. WebEos is developed using Web 2.0 technologies and provides a graphical, interactive web interface for origami construction and proving. In Scorum, we are preparing web services for a wide range of symbolic computing systems and are using these services in our origami environment. We explain the functionalities of this environment and discuss its architectural and technological features.

3.
4.
This paper addresses the specification of and reasoning about interactive real-time systems, their interfaces, and architectures, as well as their properties in terms of assumptions and commitments. Specifications are structured into assumptions, which restrict the behavior of a system's operational context, and commitments about the system's behavior (also called rely/guarantee or assumption/promise specification patterns in the literature). A logical approach to assumption/commitment contracts is worked out based on a mathematical system model:
• From assumption/commitment contracts, plain interface assertions for the system are derived.
• Healthiness conditions, based on the system model, are worked out for assumptions.
• Safety and liveness properties of assumption/commitment contracts are identified.
• From interaction specifications describing the interaction between two systems, assumption/commitment contracts for the involved systems are derived.
• Contracts for components in architectures are formulated in terms of assumptions and commitments, and conditions are worked out to ensure that the assumptions for the composite system guarantee the validity of the assumptions for its components.
Based on this theoretical foundation, architectural issues are considered for a systematic use of assumption/commitment patterns in system specification and architecture design.
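As a rough illustration of the first item above (a generic sketch, not the paper's exact formalisation), an assumption/commitment contract with assumption A over the input history x and commitment C over the observed behaviour (x, y) can be collapsed into a plain interface assertion by implication:

```latex
% Hedged sketch: the contract (A, C) induces the interface assertion
% "whenever the context satisfies A, the system delivers C".
\[
  [\,A, C\,](x, y) \;\equiv\; \bigl( A(x) \Rightarrow C(x, y) \bigr)
\]
```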

5.
CASE-BASED REASONING
Case-based reasoning (CBR) is an alternative to traditional rule-based expert systems approaches. The fact that rules can be incomplete and must be assembled through a time-consuming knowledge acquisition process has been a major bottleneck in the usefulness of rule-based methods for configuring information systems. Case-based reasoning systems, by contrast, retrieve stored problem-solving experiences, or cases, and apply them to the next problem-solving situation. The CBR process, some commercial shells, and CBR's use to augment expert systems applications are discussed.
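To make the retrieve-and-reuse idea concrete, here is a minimal sketch of a CBR step; the case representation, the attribute-overlap similarity measure, and the configuration example are illustrative assumptions, not features of the commercial shells discussed above.

```python
# Minimal sketch of the retrieve/reuse steps of a CBR cycle.
# Case structure and similarity metric are illustrative assumptions.

def similarity(problem_a, problem_b):
    """Fraction of attributes on which two problem descriptions agree."""
    keys = set(problem_a) | set(problem_b)
    matches = sum(problem_a.get(k) == problem_b.get(k) for k in keys)
    return matches / len(keys)

def solve(case_base, new_problem):
    """Retrieve the most similar past case and reuse its solution."""
    best_case = max(case_base, key=lambda c: similarity(c["problem"], new_problem))
    return best_case["solution"]  # a full system would also revise and retain

case_base = [
    {"problem": {"os": "unix", "users": 50},  "solution": "config-A"},
    {"problem": {"os": "unix", "users": 500}, "solution": "config-B"},
]
print(solve(case_base, {"os": "unix", "users": 50}))  # -> config-A
```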

6.
Verification of clocked and hybrid systems
This paper presents a new computational model for real-time systems, called the clocked transition system (CTS) model. The CTS model is a development of our previous timed transition model, with some of the changes inspired by the model of timed automata. The new model leads to a simpler style of temporal specification and verification, requiring no extension of the temporal language. We present verification rules for proving safety and liveness properties of clocked transition systems. All rules are associated with verification diagrams. The verification of response properties requires adjustments of the proof rules developed for untimed systems, reflecting the fact that progress in real-time systems is ensured by the progress of time and not by fairness. The style of the verification rules is very close to the verification style of untimed systems, which allows the (re)use of verification methods and tools developed for untimed reactive systems for proving all interesting properties of real-time systems. We conclude with the presentation of a branching-time-based approach for verifying that an arbitrary given CTS is non-zeno. Finally, we present an extension of the model and the invariance proof rule for hybrid systems.
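As a rough sketch of the flavour of such a model (the notation and side conditions differ from the paper's), all clocks are advanced uniformly by a single tick transition, constrained by a time-progress condition that bounds how far time may advance before some discrete transition must be taken:

```latex
% Hedged sketch: one tick step advances every clock c_i by the same
% amount Delta, and the time-progress condition Pi must hold
% throughout the elapsed interval.
\[
  \mathit{tick}:\quad
  \exists \Delta > 0.\;
  \bigwedge_{i=1}^{n} c_i' = c_i + \Delta
  \;\wedge\;
  \forall t \in [0, \Delta].\; \Pi(c_1 + t, \ldots, c_n + t)
\]
```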

7.
Compositional reasoning aims to improve the scalability of verification tools by reducing the original verification task into subproblems. The simplification is typically based on assume-guarantee reasoning principles, and requires user guidance to identify appropriate assumptions for components. In this paper, we propose a fully automated approach to compositional reasoning that consists of automated decomposition, using a hypergraph partitioning algorithm for balanced clustering of variables, and assumption discovery, using the L* algorithm for active learning of regular languages. We present a symbolic implementation of the learning algorithm and incorporate it in the model checker NuSMV. In some cases, our experiments demonstrate significant savings in the computational requirements of symbolic model checking.
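This style of automated decomposition is usually built on the classical non-circular assume-guarantee rule, shown here as a generic sketch rather than the paper's exact formulation: if M1 satisfies P under assumption A, and M2 discharges A, then the composition satisfies P.

```latex
\[
  \frac{\langle A \rangle\, M_1\, \langle P \rangle
        \qquad
        \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
       {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]
```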

8.
A neural architecture, based on several self-organising maps, is presented which counteracts the parameter drift problem for an array of conducting polymer gas sensors used for odour sensing. The architecture is named mSom, where m is the number of odours to be recognised; it consists mainly of m maps, each of which approximates the statistical distribution of a given odour. Competition occurs both within each map and between maps for the selection of the minimum map distance in Euclidean space. The network (mSom) is able to adapt itself to changes of the input probability distribution through repeated self-training based on its experience. This architecture has been tested and compared with other neural architectures, such as RBF and Fuzzy ARTMAP. The network shows long-term stable behaviour and is completely autonomous during the testing phase, where re-adaptation of the neurons is needed due to changes of the input probability distribution of the given data set.
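The between-map competition can be sketched as follows; the map sizes, learning rate, and single-winner adaptation step are illustrative assumptions rather than the mSom training procedure itself.

```python
import numpy as np

# Illustrative sketch of between-map competition: each odour has its own
# self-organising map, and a sample is assigned to the map whose
# best-matching unit is closest in Euclidean distance.

def best_matching_unit(som, sample):
    """Return (index, distance) of the closest neuron in one map."""
    dists = np.linalg.norm(som - sample, axis=1)
    idx = int(np.argmin(dists))
    return idx, dists[idx]

def classify_and_adapt(maps, sample, lr=0.05):
    """Pick the winning map across all m odour maps, then nudge its winner."""
    winners = [best_matching_unit(som, sample) for som in maps]
    odour = int(np.argmin([d for _, d in winners]))
    idx, _ = winners[odour]
    maps[odour][idx] += lr * (sample - maps[odour][idx])  # drift compensation
    return odour

rng = np.random.default_rng(0)
maps = [rng.normal(loc=c, scale=0.1, size=(16, 8)) for c in (0.0, 1.0, 2.0)]
print(classify_and_adapt(maps, rng.normal(loc=1.0, scale=0.1, size=8)))  # likely 1
```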

9.
We have developed and implemented the Relational Grid Monitoring Architecture (R-GMA) as part of the DataGrid project, to provide a flexible information and monitoring service for use by other middleware components and applications. R-GMA presents users with a virtual database and mediates queries posed against this database: users pose queries against a global schema, and R-GMA takes responsibility for locating relevant sources and returning an answer. R-GMA's architecture and mechanisms are general and can be used wherever there is a need for publishing and querying information in a distributed environment. We discuss the requirements, design, and implementation of R-GMA as deployed on the DataGrid testbed. We also describe some of the ways in which R-GMA is being used.
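The mediation idea can be illustrated with a toy sketch: a consumer query against a global (virtual) table is routed to the registered producers and their matching tuples are merged. The class names, table, and matching logic below are assumptions for illustration, not R-GMA's actual API.

```python
# Toy sketch of query mediation over a virtual table. Producers register a
# fetch function; a consumer query is answered by locating the producers
# for that table and unioning their matching rows.

class VirtualTable:
    def __init__(self, name):
        self.name = name
        self.producers = []          # (description, fetch_function) pairs

    def register(self, description, fetch):
        self.producers.append((description, fetch))

    def query(self, predicate):
        """Locate relevant producers and union their matching tuples."""
        rows = []
        for description, fetch in self.producers:
            rows.extend(row for row in fetch() if predicate(row))
        return rows

cpu_load = VirtualTable("CpuLoad")
cpu_load.register("site-A", lambda: [{"site": "A", "host": "a1", "load": 0.4}])
cpu_load.register("site-B", lambda: [{"site": "B", "host": "b1", "load": 0.9}])
print(cpu_load.query(lambda r: r["load"] > 0.5))   # -> the site-B row only
```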

10.
Pattern-directed invocation is a commonly used artificial-intelligence reasoning technique in which a procedure, called a demon, is automatically invoked whenever a term matching its pattern appears in a ground data base. For completeness, if the data base includes equations, a demon needs to be invoked not only when a term in the data base exactly matches its pattern, but also when some variant of a term in the data base matches. An incremental algorithm has been developed for invoking demons in this situation without generating all possible variants of terms in the data base. The algorithm is shown to be complete for a class of demons, called transparent demons, that obey a natural restriction between the pattern and the body of the demon. Completeness is maintained when new demons, terms, or equations are added to the data base in any order. Equations can also be retracted via a truth maintenance system. The algorithm has been implemented as part of a reasoning system called bread.
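A naive sketch of the basic triggering mechanism is shown below; it deliberately omits the paper's incremental handling of equational variants and only illustrates what pattern-directed invocation means. The pattern syntax and helper names are assumptions.

```python
# Naive sketch of pattern-directed invocation: a demon fires whenever a
# newly asserted ground term matches its pattern. Variables are symbols
# starting with '?'. This does NOT reproduce the incremental algorithm
# for equational variants described above.

def match(pattern, term, bindings=None):
    """Structurally match a pattern (with ?vars) against a ground term."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings and bindings[pattern] != term:
            return None
        bindings[pattern] = term
        return bindings
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

demons = []      # list of (pattern, body) pairs
database = set()

def assert_term(term):
    """Add a ground term and invoke every demon whose pattern matches it."""
    database.add(term)
    for pattern, body in demons:
        bindings = match(pattern, term)
        if bindings is not None:
            body(bindings)

demons.append((("parent", "?x", "?y"),
               lambda b: print("demon fired:", b["?x"], "is a parent of", b["?y"])))
assert_term(("parent", "alice", "bob"))  # -> demon fired: alice is a parent of bob
```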

11.
The Muscadet theorem prover is a knowledge-based system able to prove theorems in some non-trivial mathematical domains. The knowledge bases contain some general deduction strategies based on natural deduction, mathematical knowledge, and metaknowledge. Metarules build new rules, easily usable by the inference engine, from formal definitions. Mathematical knowledge may be general or specific to some particular field. Muscadet proved many theorems in set theory, mappings, relations, topology, geometry, and topological linear spaces. Some of the theorems were rather difficult. Muscadet is now intended to become an assistant for mathematicians in discrete geometry for cellular automata. In order to evaluate the difficulty of this task, researchers were observed while proving some lemmas, and Muscadet was tested on easy ones. New methods have to be added to the knowledge base, such as reasoning by induction, but also new heuristics for splitting and reasoning by cases. It is also necessary to find good representations for some mathematical objects.

12.
The research field of inductive programming is concerned with the design of algorithms for learning computer programs with complex flow of control (typically recursive calls) from incomplete specifications such as examples. We introduce a basic algorithmic approach to inductive programming and illustrate it with three systems: dialogs learns logic programs by combining inductive and abductive reasoning; the classical thesys system and its extension igor1 learn functional programs based on a recurrence-detection mechanism in traces; igor2 learns functional programs over algebraic data types, making use of constructor-term rewriting systems. Furthermore, we give a short history of inductive programming, discuss related approaches, and give hints about current applications and possible future directions of research.
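The task can be illustrated with a small worked example (the I/O pairs and syntax are illustrative, not the actual input format of dialogs, thesys, or igor2): from a handful of examples of list reversal, the learner is expected to induce a recursive program such as the one written out by hand below.

```python
# Illustrative inductive programming task: from a few I/O examples of
# list reversal, induce a recursive program.

examples = [
    ([], []),
    ([1], [1]),
    ([1, 2], [2, 1]),
    ([1, 2, 3], [3, 2, 1]),
]

# A recursive hypothesis consistent with the examples, of the kind such
# systems synthesise (here written by hand for illustration).
def reverse(xs):
    if not xs:                        # base case from the first example
        return []
    return reverse(xs[1:]) + [xs[0]]  # recursive call on the tail

assert all(reverse(inp) == out for inp, out in examples)
```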

13.
14.
Modern information systems rely more and more on combining concurrent, distributed, mobile, and heterogeneous components. This move away from old systems, typically conceived in isolation, induces the need for new languages and software architectures. In particular, coordination languages have been proposed to cleanly separate computational aspects from communication. On the other hand, software architects face the problem of specifying and reasoning about non-functional requirements. All these issues are widely perceived as fundamental to improving software productivity, enhancing maintainability, advocating modularity, promoting reusability, and leading to systems that are more tractable and more amenable to verification and global analysis. The FOCLASA workshop was organized on August 24th, 2002, as a satellite event of Concur'02, with the aim of bringing together researchers working on the foundations of component-based computing, coordination, and software architectures. This volume contains 12 selected papers presented at the workshop. They were reviewed by the program committee, consisting, besides the editors, of
• Rocco De Nicola (University of Firenze, Italy)
• José Luiz Fiadeiro (ATX Software and University of Lisbon, Portugal)
• Roberto Gorrieri (University of Bologna, Italy)
• Paola Inverardi (University of L'Aquila, Italy)
• Joost Kok (University of Leiden, The Netherlands)
• Antonio Porto (New University of Lisbon, Portugal)
We would like to thank them, together with the authors of the papers, for their contributions to the meeting. We would also like to thank Michael Mislove for his help in editing this volume. Finally, we are grateful to L. Brim, M. Kretinsky, and A. Kucera for their help in organizing the workshop in Brno.

15.
Much work has been done to clarify the notion of metamodelling, and new ideas, such as strict metamodelling, the distinction between ontological and linguistic instantiation, unified modelling elements, and deep instantiation, have been introduced. However, many of these ideas have not yet been fully developed and integrated into modelling languages with (concrete) syntax, rigorous semantics, and tool support. Consequently, applying these ideas in practice and reasoning about their meaning is difficult, if not impossible. In this paper, we strive to add semantic rigour and conceptual clarity to metamodelling through the introduction of Nivel, a novel metamodelling language capable of expressing models spanning an arbitrary number of levels. Nivel is based on a core set of conceptual modelling concepts: class, generalisation, instantiation, attribute, value, and association. Nivel adheres to a form of strict metamodelling and supports deep instantiation of classes, associations, and attributes. A formal semantics is given for Nivel by translation to the weight constraint rule language (WCRL), which enables decidable, automated reasoning about Nivel. The modelling facilities of Nivel and the utility of the formalisation are demonstrated in a case study on feature modelling.

16.
Booker, Lashon B. Machine Learning, 1988, 3(2-3): 161-192
Most classifier systems learn a collection of stimulus-response rules, each of which directly acts on the problem-solving environment and accrues strength proportional to the overt reward expected from the behavioral sequences in which the rule participates. Gofer is an example of a classifier system that builds an internal model of its environment, using rules to represent objects, goals, and relationships. The model is used to direct behavior, and learning is triggered whenever the model proves to be an inadequate basis for generating behavior in a given situation. This means that overt external rewards are not necessarily the only or the most useful source of feedback for inductive change. Gofer is tested in a simple two-dimensional world where it learns to locate food and avoid noxious stimulation.

17.
Karp, Peter D. Machine Learning, 1993, 12(1-3): 89-116
Hypothesis-formation problems occur when the outcome of an experiment as predicted by a scientific theory does not match the outcome observed by a scientist. The problem is to modify the theory, and/or the scientist's conception of the initial conditions of the experiment, such that the prediction agrees with the observation. I treat hypothesis formation as a design problem. A program called Hypgene designs hypotheses by reasoning backward from its goal of eliminating the difference between prediction and observation. This prediction error is eliminated by design operators that are applied by a planning system. The synthetic, goal-directed application of these operators should prove more efficient than past generate-and-test approaches to hypothesis generation. Hypgene uses heuristic search to guide a generator that is focused on the errors in a prediction. The advantages of the design approach to hypothesis formation over the generate-and-test approach are analogous to the advantages of dependency-directed backtracking over chronological backtracking. These hypothesis-formation methods were developed in the context of a historical study of a scientific research program in molecular biology. This article describes in detail the results of applying the Hypgene program to several hypothesis-formation problems identified in this historical study. Hypgene found most of the same solutions as did the biologists, which demonstrates that it is capable of solving complex, real-world hypothesis-formation problems.

18.
In this article, we present the design of an intrusion detection system for voice over IP (VoIP) networks. The first part of our work consists of a simple single-component intrusion detection system called Scidive. In the second part, we extend the design of Scidive and build a distributed and correlation-based intrusion detection system called SpaceDive. We create several attack scenarios and evaluate the accuracy and efficiency of the system in the face of these attacks. To the best of our knowledge, this is the first comprehensive look at the problem of intrusion detection in VoIP systems. It includes treatment of the challenges faced due to the distributed nature of the system, the nature of VoIP traffic, and the specific kinds of attacks on such systems.

19.
Performance analysis plays an increasingly important role in the design of embedded real-time systems. Time-to-market pressure in this domain is high, while the available implementation technology is often pushed to its limit to minimize cost. This requires analysis of performance as early as possible in the life cycle. Simulation-based techniques are often not sufficiently productive. We present an alternative, analytical approach based on Real-Time Calculus. Modular performance analysis is presented through a case study in which several candidate architectures are evaluated for a distributed in-car radio navigation system. The analysis is efficient due to the high abstraction level of the model, which makes the technique suitable for early design exploration.
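A common Real-Time Calculus style bound, which this kind of analysis builds on, is the maximum horizontal deviation between an upper arrival curve and a lower service curve; the sketch below computes it numerically for illustrative affine and rate-latency curves (the curves, discretisation, and function names are assumptions, not the case study's model).

```python
# Sketch of a Real-Time Calculus style delay bound: the maximum
# horizontal deviation between an upper arrival curve alpha and a lower
# service curve beta, evaluated over a finite discretised horizon.

def delay_bound(alpha, beta, horizon=1000, step=1):
    worst = 0
    for t in range(0, horizon, step):
        demand = alpha(t)
        # smallest shift tau such that the service catches up with the demand
        tau = 0
        while beta(t + tau) < demand and tau < horizon:
            tau += step
        worst = max(worst, tau)
    return worst

alpha = lambda t: 2 + 0.5 * t             # bursty stream: burst 2, rate 0.5
beta  = lambda t: max(0, 1.0 * (t - 4))   # rate-latency service: rate 1, latency 4

print(delay_bound(alpha, beta))  # analytical bound: latency + burst/rate = 4 + 2 = 6
```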

20.
Hierarchical Fusion of Multiple Classifiers for Hyperspectral Data Analysis
Many classification problems involve high-dimensional inputs and a large number of classes. Multiclassifier fusion approaches to such difficult problems typically centre around smart feature extraction, input resampling methods, or input space partitioning to exploit modular learning. In this paper, we investigate how partitioning of the output space (i.e. the set of class labels) can be exploited in a multiclassifier fusion framework to simplify such problems and to yield better solutions. Specifically, we introduce a hierarchical technique to recursively decompose a C-class problem into C-1 two-(meta)class problems. A generalised modular learning framework is used to partition a set of classes into two disjoint groups called meta-classes. The coupled problems of finding a good partition and of searching for a linear feature extractor that best discriminates the resulting two meta-classes are solved simultaneously at each stage of the recursive algorithm. This results in a binary tree whose leaf nodes represent the original C classes. The proposed hierarchical multiclassifier framework is particularly effective for difficult classification problems involving a moderately large number of classes. The proposed method is illustrated on a problem related to classification of land cover using hyperspectral data: a 12-class AVIRIS subset with 180 bands. For this problem, the classification accuracies obtained were superior to those of most other techniques developed for hyperspectral classification. Moreover, the class hierarchies that were automatically discovered conformed very well with human domain experts' opinions, which demonstrates the potential of using such a modular learning approach for discovering domain knowledge automatically from data.
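A sketch of the recursive output-space decomposition is given below; here the meta-class partition is a simple 2-means clustering of class-mean vectors, whereas the paper couples the partition with a discriminative feature extractor, so the partition criterion and helper names are assumptions.

```python
import numpy as np

# Sketch of recursive output-space decomposition: a set of C classes is
# split into two meta-classes, and each group is split again until single
# classes remain, giving a binary tree with C leaves and C-1 internal
# two-(meta)class problems.

def split_classes(class_means):
    """Partition class labels into two meta-classes via 2-means on their means."""
    labels = list(class_means)
    centers = [class_means[labels[0]], class_means[labels[-1]]]
    for _ in range(10):                       # a few Lloyd iterations suffice here
        groups = ([], [])
        for lab in labels:
            d = [np.linalg.norm(class_means[lab] - c) for c in centers]
            groups[int(np.argmin(d))].append(lab)
        if not groups[0] or not groups[1]:    # guard against a degenerate split
            groups = (labels[:1], labels[1:])
            break
        centers = [np.mean([class_means[l] for l in g], axis=0) for g in groups]
    return groups

def build_tree(class_means):
    """Return a nested-tuple binary tree whose leaves are the class labels."""
    labels = list(class_means)
    if len(labels) == 1:
        return labels[0]
    left, right = split_classes(class_means)
    return (build_tree({l: class_means[l] for l in left}),
            build_tree({l: class_means[l] for l in right}))

rng = np.random.default_rng(0)
class_means = {c: rng.normal(size=4) + 3.0 * (c % 2) for c in range(5)}
print(build_tree(class_means))   # one internal node per two-(meta)class problem
```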
