Similar Documents
Found 20 similar documents (search time: 781 ms)
1.
Abstract: In this paper the Web Ontology Language (OWL) is examined as a means of instantiating expert system knowledge bases intended for semantic Web applications. In particular, OWL is analyzed for expressing Unified Modeling Language (UML) representations that have been augmented with propositional logic asserted as inter‐link constraints. The motivation is ultimately to provide declarative propositional logic constraints that can be represented in UML and declaratively implemented using OWL and other constructs to realize semantic Web knowledge base repositories and databases that facilitate expert system applications. The results of this paper show that OWL is sufficient for capturing most inter‐link constraints asserted on generalization/specialization instances; however, OWL alone is inadequate for representing some inter‐link constraints asserted on associations. We propose enhancements to OWL via RDF extensions for the reification of associations into classes. These extensions mitigate all concerns that were identified in OWL as part of this study. The result is increased support for declarative constraint representations, which can be expressed in knowledge bases in the context of the semantic Web.
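The reification pattern this abstract proposes can be sketched in a few lines. The snippet below is purely illustrative (it is not from the paper, and the association name, individuals, and property names are invented): an association instance is turned into a "link object" represented as triples, so that a propositional inter-link constraint can be checked against the link itself.

```python
# Hypothetical sketch of reifying a UML association into a class of link
# objects (the pattern the abstract proposes via RDF extensions to OWL).
# All names here are illustrative, not from the paper.

def reify(assoc_name, subject, obj, link_id):
    """Turn one association instance into triples whose hub node
    (link_id) can carry further constraints."""
    return {
        (link_id, "rdf:type", assoc_name),  # the link is an instance of the association-class
        (link_id, "hasSource", subject),
        (link_id, "hasTarget", obj),
    }

triples = reify("Supervises", "Alice", "Bob", "link1")

# A propositional inter-link constraint, e.g. "no one supervises themself",
# can now be checked over the reified links:
def self_supervision(ts):
    src = {l: o for (l, p, o) in ts if p == "hasSource"}
    tgt = {l: o for (l, p, o) in ts if p == "hasTarget"}
    return [l for l in src if src[l] == tgt.get(l)]

print(self_supervision(triples))  # []
```

Without reification, OWL has no node on which to hang such a per-link constraint; with it, the constraint becomes an ordinary assertion about a class of link objects.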

2.
We have implemented a compiler for key parts of Modelica, an object-oriented language supporting equation-based modeling and simulation of complex physical systems. The compiler is extensible, to support experiments with emerging tools for physical models. To achieve extensibility, the implementation is done declaratively in JastAdd, a metacompilation system supporting modern attribute grammar mechanisms such as reference attributes and nonterminal attributes. This paper reports on experiences from this implementation. For name and type analyses, we illustrate how declarative design strategies, originally developed for a Java compiler, could be reused to support Modelica’s advanced features of multiple inheritance and structural subtyping. Furthermore, we present new general design strategies for declarative generation of target ASTs from source ASTs. We illustrate how these strategies are used to resolve a generics-like feature of Modelica called modifications, and to support flattening, a fundamental part of Modelica compilation. To validate that the approach is practical, we have compared the execution speed of our compiler with that of two existing Modelica compilers.
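The "reference attribute" mechanism mentioned above can be illustrated without JastAdd or Java. In the hypothetical sketch below (the `Node` class, `lookup` attribute, and example AST are invented for illustration), name analysis is expressed as a memoized attribute whose value is a reference to another AST node, evaluated on demand in the style attribute grammars provide.

```python
# Illustrative sketch (not JastAdd): a reference attribute is an attribute
# whose value is a reference to another AST node -- here, the declaration
# a variable use binds to. Memoization gives on-demand evaluation.
from functools import lru_cache

class Node:
    def __init__(self, kind, name=None, children=()):
        self.kind, self.name, self.children = kind, name, list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

    @lru_cache(maxsize=None)
    def lookup(self, name):
        # Name analysis as an attribute equation: search this node's
        # local declarations, then delegate to the enclosing scope.
        for c in self.children:
            if c.kind == "decl" and c.name == name:
                return c
        return self.parent.lookup(name) if self.parent else None

use = Node("use", "x")
root = Node("block", children=[Node("decl", "x"), Node("block", children=[use])])
decl = use.lookup("x")
print(decl.kind, decl.name)  # decl x
```

The point of the declarative style is that `lookup` is defined by local equations only; the traversal order and caching are handled by the evaluation machinery, which is what makes such specifications reusable across languages.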

3.
This study investigated the frequency of use of information problem‐solving (IPS) skills and its relationship with learning outcomes. During the course of the study, 40 teachers carried out a collaborative IPS task in small virtual groups in a 4‐week online training course. The status of IPS skills was collected through self‐reports handed in over the course of the 4 weeks. Learning was evaluated by means of open‐ended questionnaires before and after the group task. Three types of knowledge learning were evaluated: declarative, procedural and situational. Teachers exhibited a recurrent use of all skills during the whole collaborative task, although periodic use differed from week to week. Results showed a relationship between some IPS skills and declarative and procedural knowledge. The skills that were statistically significant were share information, read peer's information and analyse information. Implications for learning support and instruction are discussed.

4.
This paper presents a logical formalism for representing and reasoning with statistical knowledge. One of the key features of the formalism is its ability to deal with qualitative statistical information. It is argued that statistical knowledge, especially that of a qualitative nature, is an important component of our world knowledge and that such knowledge is used in many different reasoning tasks. The work is further motivated by the observation that previous formalisms for representing probabilistic information are inadequate for representing statistical knowledge. The representation mechanism takes the form of a logic that is capable of representing a wide variety of statistical knowledge, and that possesses an intuitive formal semantics based on the simple notions of sets of objects and probabilities defined over those sets. Furthermore, a proof theory is developed and is shown to be sound and complete. The formalism offers a perspicuous and powerful representational tool for statistical knowledge, and a proof theory which provides a formal specification for a wide class of deductive inferences. The specification provided by the proof theory subsumes most probabilistic inference procedures previously developed in AI. The formalism also subsumes ordinary first-order logic, offering a smooth integration of logical and statistical knowledge.
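The semantics described above, probabilities defined over sets of objects, can be made concrete with a tiny example. The snippet below is a minimal sketch under invented data (the domain, and the `bird`/`flies` predicates, are illustrative only): a statistical statement is interpreted as the proportion of a reference set that satisfies a property.

```python
# Minimal sketch of "probabilities defined over sets of objects":
# a statistical statement [phi | psi] is the proportion of psi-objects
# that also satisfy phi. The domain here is invented for illustration.

domain = {"tweety": {"bird", "flies"},
          "opus":   {"bird"},          # a bird that does not fly
          "rex":    {"dog"}}

def prob(phi, psi=lambda x: True):
    """Proportion of objects satisfying psi that also satisfy phi."""
    ref = [x for x in domain if psi(x)]
    return sum(phi(x) for x in ref) / len(ref) if ref else 0.0

bird  = lambda x: "bird" in domain[x]
flies = lambda x: "flies" in domain[x]

print(prob(flies, bird))  # 0.5 -- "half of the birds fly" in this tiny domain
```

Qualitative statements such as "most birds fly" are then naturally read as constraints on such proportions (e.g. the value exceeding some threshold), rather than as probabilities of individual propositions.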

5.
6.
7.
To implement diverse and partial flow of information in cognitive processes, we need a design method without explicit stipulation of domain/task-dependent information flow, together with a control scheme for guiding information processing to concern only important information depending on contexts. A computational architecture is proposed which is based on a first-order logic program with a dynamics. The declarative semantics of the logic program is defined by measuring the degree of violation in terms of potential energy, and a control scheme for both analog and symbolic computation is derived from the resultant dynamics, which replaces domain/task-dependent procedures, hence avoiding intractable complexity in the system design. This inherent integration of the control scheme with the declarative semantics guarantees that inferences are naturally centered around relevant information in a context-sensitive manner. The essence of inference mechanisms proposed so far, such as weighted abduction and marker passing, is also subsumed by this dynamical control. All this justifies further exploration for improving this sort of formalism to deal with large, real-world problems.

8.
We investigated theoretically and empirically a range of training schedules on tasks with three knowledge types: declarative, procedural, and perceptual-motor. We predicted performance for 6435 potential eight-block training schedules with ACT-R's declarative memory equations. Hybrid training schedules (schedules consisting of distributed and massed practice) were predicted to produce better performance than purely distributed or massed training schedules. The results of an empirical study (N = 40) testing four exemplar schedules indicated a more complex picture. There were no statistical differences among the groups in the declarative and procedural tasks. We also found that participants in the hybrid practice groups produced reliably better performance than those in the distributed practice group for the perceptual-motor task – the results indicate training schedules with some spacing and some intensiveness may lead to better performance, particularly for perceptual-motor tasks, and that tasks with mixed types of knowledge might be better taught with a hybrid schedule.
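The core of ACT-R's declarative memory equations mentioned above is the base-level learning equation, B = ln(Σⱼ tⱼ⁻ᵈ), where tⱼ is the time since the j-th practice and d is the decay rate. The sketch below (the practice timings are invented, and this is only the base equation, not the study's full prediction model) shows its two basic behaviors: activation grows with practice and decays with elapsed time.

```python
# ACT-R base-level learning equation: B = ln(sum_j t_j**(-d)).
# Timings are illustrative; d = 0.5 is the conventional default decay rate.
import math

def base_level_activation(ages, d=0.5):
    """ages: times elapsed since each past practice of a memory chunk."""
    return math.log(sum(t ** -d for t in ages))

few   = base_level_activation([1.0, 2.0])            # two practices
many  = base_level_activation([1.0, 2.0, 3.0, 4.0])  # four practices
later = base_level_activation([11.0, 12.0])          # same two practices, 10 time units later
```

Evaluating such an expression for each candidate schedule is what makes it feasible to rank thousands of eight-block schedules before running any participants.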

9.
We introduce a fixpoint semantics for logic programs with two kinds of negation: an explicit negation and a negation-by-failure. The programs may also be prioritized, that is, their clauses may be arranged in a partial order that reflects preferences among the corresponding rules. This yields a robust framework for representing knowledge in logic programs with a considerable expressive power. The declarative semantics for such programs is particularly suitable for reasoning with uncertainty, in the sense that it pinpoints the incomplete and inconsistent parts of the data, and regards the remaining information as classically consistent. As such, this semantics allows conclusions to be drawn in a non-trivial way, even in cases where the logic programs under consideration are not consistent. Finally, we show that this formalism may be regarded as a simple and flexible process for belief revision.
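A fixpoint semantics in its simplest form can be shown for a definite (negation-free) program; the paper's construction extends this style to programs with two negations and clause priorities. The sketch below is that simple case only, with an invented three-clause program: the immediate-consequence operator is iterated until nothing new is derivable, yielding the least model.

```python
# Least-fixpoint construction for a definite logic program: iterate the
# immediate-consequence operator T_P until it adds nothing new.
# The program below is an invented toy example (ground clauses only).

program = [
    ("flies(tweety)", ["bird(tweety)", "winged(tweety)"]),
    ("bird(tweety)", []),
    ("winged(tweety)", []),
]

def least_fixpoint(prog):
    model = set()
    while True:
        # T_P(model): heads of all clauses whose bodies hold in model
        step = {head for head, body in prog if all(b in model for b in body)}
        if step <= model:
            return model
        model |= step

print(sorted(least_fixpoint(program)))
# ['bird(tweety)', 'flies(tweety)', 'winged(tweety)']
```

With negation and priorities the operator is no longer monotone, which is why the paper needs a more refined fixpoint construction than this plain iteration.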

10.
Human experts tend to introduce intermediate terms in giving their explanations. The expert's explanation of such terms is operational for the context that triggered the explanation; however, term definitions often remain incomplete. Further, the expert's (re)use of these terms is hierarchical (similar to natural language). In this paper, we argue that a hierarchical incremental knowledge acquisition (KA) process that captures the expert terms and operationalizes them while incompletely defined makes the KA task more effective. Towards this end, we present our knowledge representation formalism, Nested Ripple Down Rules (NRDR), which is a substantial extension to the (Multiple Classification) Ripple Down Rule (RDR) KA framework. The incremental KA process with NRDR as the underlying knowledge representation has confirmation holistic features. This allows simultaneous incremental modelling and KA and eases the knowledge base (KB) development process. Our NRDR formalism preserves the strength of incremental refinement methods, that is, the ease of maintenance of the KB. It also addresses some of their shortcomings: repetition, lack of explicit modelling, and readability. KBs developed with NRDR describe an explicit model of the domain. This greatly enhances the reusability of the acquired knowledge. This paper also presents a theoretical framework for analysing the structure of RDR in general and NRDR in particular. Using this framework, we analyse the conditions under which RDR converges towards the target KB. We discuss the maintenance problems of NRDR as a function of this convergence. Further, we analyse the conditions under which NRDR offers an effective approach for domain modelling. We show that the maintenance of NRDR requires effort similar to maintaining RDR for most of the KB development cycle. We show that when an NRDR KB shows an increase in maintenance requirements in comparison with RDR during its development, this added requirement can be handled automatically using stored, previously seen cases.

11.
Declarative systems aim at solving tasks by running inference engines on a specification, to free their users from having to specify how a task should be tackled. In order to provide such functionality, declarative systems themselves apply complex reasoning techniques, and, as a consequence, the development of such systems can be laborious work. In this paper, we demonstrate that the declarative approach can be applied to develop such systems, by tackling the tasks solved inside a declarative system declaratively. In order to do this, a meta-level representation of those specifications is often required. Furthermore, using the language of the system itself for the meta-level representation opens the door to bootstrapping: an inference engine can be improved using the inference it performs. One such declarative system is the IDP knowledge base system, based on the language FO(·)^IDP, a rich extension of first-order logic. In this paper, we discuss how FO(·)^IDP can support meta-level representations in general and which language constructs make those representations even more natural. Afterwards, we show how meta-FO(·)^IDP can be applied to bootstrap its model expansion inference engine. We discuss the advantages of this approach: the resulting program is easier to understand, easier to maintain, and more flexible.

12.
13.
Chandrasekaran, B. Machine Learning, 1989, 4(3-4): 339-345
One of the old saws about learning in AI is that an agent can only learn what it can be told, i.e., the agent has to have a vocabulary for the target structure which is to be acquired by learning. What this vocabulary is, for various tasks, is an issue that is common to whether one is building a knowledge system by learning or by other more direct forms of knowledge acquisition. I have long argued that both the forms of declarative knowledge required for problem solving as well as problem-solving strategies are functions of the problem-solving task, and have identified a family of generic tasks that can be used as building blocks for the construction of knowledge systems. In this editorial, I discuss the implications of this line of research for knowledge acquisition and learning.

14.
The case-based learning (CBL) approach has gained attention in medical education as an alternative to traditional learning methodology. However, current CBL systems do not facilitate and provide computer-based domain knowledge to medical students for solving real-world clinical cases during CBL practice. To automate CBL, clinical documents are beneficial for constructing domain knowledge. In the literature, most systems and methodologies require a knowledge engineer to construct machine-readable knowledge. Keeping in view these facts, we present a knowledge construction methodology (KCM-CD) to construct domain knowledge ontology (i.e., structured declarative knowledge) from unstructured text in a systematic way using artificial intelligence techniques, with minimum intervention from a knowledge engineer. To utilize the strengths of humans and computers, and to realize the KCM-CD methodology, an interactive case-based learning system (iCBLS) was developed. Finally, the developed ontological model was evaluated to assess the quality of domain knowledge in terms of a coherence measure. The results showed that the overall domain model has positive coherence values, indicating that all words in each branch of the domain ontology are correlated with each other and that the quality of the developed model is acceptable.

15.
Knowledge-based systems for document analysis and understanding (DAU) are quite useful whenever analysis has to deal with changing free-form document types which require different analysis components. In this case, declarative modeling is a good way to achieve flexibility. An important application domain for such systems is the business letter domain. Here, high accuracy and the correct assignment to the right people and the right processes is a crucial success factor. Our solution proposes a comprehensive knowledge-centered approach: we model not only comparatively static knowledge concerning document properties and analysis results within the same declarative formalism, but we also include the analysis task and the current context of the system environment within the same formalism. This allows an easy definition of new analysis tasks and also an efficient and accurate analysis by using expectations about incoming documents as context information. The approach described has been implemented within the VOPR (Virtual Office PRototype) system. This DAU system gains the required context information from a commercial workflow management system (WfMS) by constant exchanges of expectations and analysis tasks. Further interaction between these two systems covers the delivery of results from DAU to the WfMS and the delivery of corrected results vice versa. Received June 19, 1999 / Revised November 8, 2000

16.
Ergonomics, 2012, 55(11): 1801-1842
Abstract

Many system developers face the following problem: designing effective human-computer interfaces requires human factors expertise, but specialists possessing such expertise are not always available to contribute to development. This paper identifies a particular instance of the problem: that faced by military procurers who are not human factors experts when they assess whether speech-based computers will be suitable for specific future battlefield applications. The paper describes a method enabling the procurer to systematically develop simulations of future systems (task, device, and user) and to perform empirical evaluations on them. The method is modelled on the structured analysis and design methods employed by software engineers, the scope, process, and notation of which are explicit and proceduralized. A preliminary test suggests that the method has the potential to improve the quality of early speech interface assessments by procurers. However, some difficulties remain in representing declarative knowledge of device-user interaction, and in deciding an appropriate level for describing procedures to support such assessors. The implications of the work are considered for the more general transfer of human factors knowledge to non-specialists.

17.
18.
The paper develops the Smart Object paradigm and its instantiation, which provide a new conceptualization for the modeling, design, and development of an important but little researched class of information systems, operations support systems (OSS). OSS is the authors' term for systems which provide interactive support for the management of large, complex operations environments, such as manufacturing plants, military operations, and large power generation facilities. The most salient feature of an OSS is its dynamic nature. The number and kind of elements composing the system, as well as the mode of control of those elements, change frequently in response to the environment. The abstraction of control and the ease with which complex dynamic control behavior can be modeled and simulated is one of the important aspects of the paradigm. The framework for the Smart Object paradigm is the fusion of object-oriented design models with declarative knowledge representation and active inferencing from AI models. Additional defining concepts from data/knowledge models, semantic data models, active databases, and frame-based systems are added to the synthesis as justified by their contribution to the ability to naturally model OSS at a high level of abstraction. The model assists in declaratively representing domain data/knowledge and its structure, and task or process knowledge, in addition to modeling multilevel control and inter-object coordination.

19.
Function in Device Representation
We explore the meanings of the terms ‘structure’, ‘behaviour’, and, especially, ‘function’ in engineering practice. Computers provide great assistance in calculation tasks in engineering practice, but they also have great potential for helping with reasoning tasks. However, realising this vision requires precision in representing engineering knowledge, in which the terms mentioned above play a central role. We start with a simple ontology for representing objects and causal interactions between objects. Using this ontology, we investigate a range of meanings for the terms of interest. Specifically, we distinguish between function as effect on the environment, and a device-centred view of device function. In the former view, function is seen as an intended or desired role that an artifact plays in its environment. We identify an important concept called mode of deployment that is often left implicit, but whose explicit representation is necessary for correct and complete reasoning. We discuss the task of design and design verification in this framework. We end with a discussion that relates needs in the world to functions of artifacts created to satisfy those needs.

20.
Agents are an important technology with the potential to supersede contemporary methods for analysing, designing, and implementing complex software. The Belief-Desire-Intention (BDI) agent paradigm has proven to be one of the major approaches to intelligent agent systems, both in academia and in industry. Typical BDI agent-oriented programming languages rely on user-provided “plan libraries” to achieve goals, and online context-sensitive subgoal selection and expansion. These allow for the development of systems that are extremely flexible and responsive to the environment, and as a result, well suited for complex applications with (soft) real-time reasoning and control requirements. Nonetheless, complex decision making that goes beyond, but is compatible with, run-time context-dependent plan selection is one of the most natural and important next steps within this technology. In this paper we develop a typical BDI-style agent-oriented programming language that enhances the usual BDI programming style with three distinguished features: declarative goals, look-ahead planning, and failure handling. First, an account that mixes both procedural and declarative aspects of goals is necessary in order to reason about important properties of goals and to decouple plans from what these plans are meant to achieve. Second, look-ahead deliberation about the effects of one choice of expansion over another is clearly desirable or even mandatory in many circumstances so as to guarantee goal achievability and to avoid undesired situations. Finally, a failure handling mechanism, suitably integrated with both declarative goals and planning, is required in order to model an adequate level of commitment to goals, as well as to be consistent with most real BDI implemented systems.
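Two of the features named above, declarative goals and failure handling, can be illustrated in miniature. The sketch below is a hypothetical toy (the `achieve` loop, plan names, and beliefs are invented, and it omits look-ahead planning entirely): a goal is a condition on beliefs rather than a procedure, plans are tried in turn, and a plan's failure drops the plan but not the goal.

```python
# Hypothetical miniature of BDI-style execution with a declarative goal
# and simple failure handling. Not from the paper; names are invented.

def achieve(goal, plans, beliefs):
    for plan in plans:
        try:
            plan(beliefs)
        except RuntimeError:
            continue            # failure handling: drop the plan, keep the goal
        if goal(beliefs):       # declarative check: success is a state, not plan completion
            return True
    return goal(beliefs)

def take_stairs(b):
    raise RuntimeError("stairs blocked")

def take_lift(b):
    b.add("at_floor_3")

beliefs = set()
goal = lambda b: "at_floor_3" in b
print(achieve(goal, [take_stairs, take_lift], beliefs))  # True
```

Decoupling the goal condition from the plans is what lets the agent judge achievement, retry after failure, and, in the full language, plan ahead over alternative expansions of the same goal.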
