Similar Documents
20 similar documents found (search time: 15 ms)
1.
Shadbolt, N., Motta, E., Rouge, A. Software, IEEE, 1993, 10(6): 34-38
Vital, a four-and-a-half-year ESPRIT II research and development project involving nine organizations in five countries, is discussed. It addresses the problems of effective process modeling for knowledge-based systems, provides guidelines on when to use various knowledge-engineering methods and techniques, and reduces the bottleneck in acquiring expert knowledge by providing both methodological and software support for developing large, industrial, knowledge-based system applications. The project's goals, approach, and workbench are outlined, and a case study is described.

2.
The authors propose a model for an intelligent assistant to aid in building knowledge-based systems (KBSs) and discuss a preliminary implementation. The assistant participates in KBS construction, including acquisition of an initial model of a problem domain, acquisition of control and task-specific inference knowledge, testing and validation, and long-term maintenance of encoded knowledge. The authors present a hypothetical scenario in which the assistant and a KBS designer cooperate to create an initial domain model and then discuss five categories of knowledge the assistant requires to offer such help. They discuss two software technologies on which the assistant is based: an object-oriented programming language and a user-interface framework.

3.
4.
In this paper, a microrobot soccer-playing game, such as that of MIROSOT (Microrobot World Cup Soccer Tournament), is adopted as a standard test bed for research on multiple-agent cooperative systems. It is considerably complex, and building a complete system to play the game requires expertise in several difficult research topics, such as mobile microrobot design, motor control, sensor technology, and intelligent strategy planning. In addition, because it is an antagonistic game, it is ideal for testing whether one method is better than another. To date there have been two kinds of architecture for building such a system. One is called the vision-based or centralized architecture, and the other is known as the robot-based or decentralized architecture. The main difference between them lies in whether there is a host computer, responsible for data processing and strategy planning, together with a global vision system that views the whole playing field and transfers environment information to the host computer in real time. We believe that the decentralized approach is more advanced, but in the preliminary stage of our study we used the centralized approach because it lightens the load on the microrobot design. In this paper, a simplified layer model of the multiple-agent cooperative system is first proposed. Based on this model, a system for a microrobot soccer-playing game is organized. A simple genetic algorithm (SGA) is then used for the autonomous evolution of cooperative behavior among the microrobots. Finally, a computer simulation system is introduced and some simulation results are explained. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.
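As a rough illustration of how a simple genetic algorithm of the kind mentioned above could evolve cooperative behavior parameters, here is a minimal sketch in Python. The chromosome encoding (strategy weights), the fitness function, and all parameter values are hypothetical and are not taken from the paper; a real setup would evaluate fitness by running simulated matches.

```python
import random

# Minimal simple genetic algorithm (SGA) sketch.
# Chromosome: a list of floats standing in for strategy weights
# (e.g. attraction to ball, goal, teammates) -- a hypothetical encoding.
POP_SIZE, GENES, GENERATIONS = 30, 4, 50
MUT_RATE, MUT_STEP = 0.1, 0.2

def fitness(chrom):
    # Placeholder objective: a real implementation would score a simulated match.
    return -sum((g - 0.5) ** 2 for g in chrom)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    point = random.randrange(1, GENES)
    return p1[:point] + p2[point:]

def mutate(chrom):
    return [g + random.uniform(-MUT_STEP, MUT_STEP) if random.random() < MUT_RATE else g
            for g in chrom]

pop = [[random.random() for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print("best strategy weights:", [round(g, 3) for g in best])
```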

5.
Knowledge, 2000, 13(4): 177-198
Slicing is a process for automatically obtaining the subparts of a program responsible for specific computations. It has been employed within conventional procedural programming to solve a number of software development issues. We have adapted and extended slicing techniques originally proposed for procedural languages to knowledge-based systems. Our techniques comprise a representation proposal for the successful and failed inferences performed by the system, a means to detect and represent the dependences among parts of the system, a formal definition of relevance among these parts, and an algorithm proven correct to obtain executable slices of a system. We illustrate the usefulness of the slicing process with practical applications.
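The core idea of slicing, following dependences backwards from a computation of interest to the parts it relies on, can be caricatured with the sketch below. The rule identifiers and the dependence representation are invented for illustration and are not the representation proposed in the article.

```python
# Backward slicing sketch over an invented dependence graph.
# Each node is a program (or rule) part; edges point to the parts it depends on.
dependences = {
    "r5": {"r3", "r4"},   # r5 uses results established by r3 and r4
    "r4": {"r1"},
    "r3": {"r2"},
    "r2": set(),
    "r1": set(),
}

def backward_slice(criterion):
    """Return every part that the slicing criterion (a node) transitively depends on."""
    slice_, worklist = set(), [criterion]
    while worklist:
        node = worklist.pop()
        if node not in slice_:
            slice_.add(node)
            worklist.extend(dependences.get(node, ()))
    return slice_

print(sorted(backward_slice("r5")))   # -> ['r1', 'r2', 'r3', 'r4', 'r5']
```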

6.
This paper describes the Quality and Experience Metric (QUEM), a method for estimating the skill level of a knowledge-based system from the quality of the solutions it produces. By providing a quantitative measure of the system's overall competence, it allows one to assess how many years of experience the system would be judged to have if it were human. QUEM can be viewed as a type of achievement or job-placement test administered to knowledge-based systems to help system designers determine how the system should be used and by what level of user. To apply QUEM, a set of subjects, experienced judges, and problems must be identified. The subjects should span a broad range of experience levels. The subjects and the knowledge-based system are asked to solve the problems, and the judges are asked to rank-order all solutions from worst quality to best. The data from the subjects are used to construct a skill function relating experience to solution quality, together with confidence bands showing the variability in performance. The system's quality ranking is then plugged into the skill function to produce an estimate of the system's experience level. QUEM can be used to gauge the experience level of an individual system, to compare two systems, or to compare a system to its intended users. This represents an important advance in providing quantitative measures of overall performance that can be applied to a broad range of systems.
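A minimal numerical sketch of the QUEM idea follows: fit a skill function from human subjects' (experience, quality-rank) data, then read off the experience level corresponding to the system's quality rank. The data values and the use of simple linear interpolation are assumptions for illustration only; the paper's actual fitting procedure and confidence bands are not reproduced.

```python
# QUEM-style estimate: map the system's quality ranking back to "years of experience".
# Hypothetical subject data: (years_of_experience, quality_rank), rank 1 = worst.
subjects = [(1, 2), (3, 5), (5, 7), (10, 11), (15, 13)]

def experience_for_rank(system_rank):
    """Invert the (experience -> quality rank) relation by linear interpolation."""
    pts = sorted(subjects, key=lambda p: p[1])          # sort by quality rank
    ranks = [r for _, r in pts]
    years = [y for y, _ in pts]
    if system_rank <= ranks[0]:
        return years[0]
    if system_rank >= ranks[-1]:
        return years[-1]
    for (y0, r0), (y1, r1) in zip(pts, pts[1:]):
        if r0 <= system_rank <= r1:
            t = (system_rank - r0) / (r1 - r0)
            return y0 + t * (y1 - y0)

print(round(experience_for_rank(9), 1), "estimated years of experience")
```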

7.
The maintenance of legacy systems is a continuing problem in the field of software maintenance. To assist in the maintenance of legacy systems, we represent the legacy system and the maintenance requirement in a compatible manner, so that the maintenance requirement can serve as a clue for identifying the relevant program clauses and data items in the database. For this purpose, a maintenance component is represented by its maintenance mode (add, modify, or delete), a property, and keywords. The corresponding information about the program's clauses is extracted from the source code of the legacy program by reverse engineering. The maintenance point identification (MPI) algorithm proposed in this research is theoretically complete and relatively efficient, and this is confirmed empirically. Using this approach, the system METASOFT has been developed for the Korea Electric Power Corporation, which uses COBOL programs and an IMS database. The system has been well accepted by its users.
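The MPI algorithm itself is not reproduced here, but the general flavour of matching a maintenance requirement (mode, property, keywords) against clause information reverse-engineered from legacy code can be sketched as follows. The data structures, clause names, and scoring are all hypothetical.

```python
# Hypothetical sketch: rank program clauses by keyword overlap with a maintenance requirement.
requirement = {"mode": "modify", "keywords": {"tariff", "billing", "rate"}}

# Clause summaries as they might be extracted from legacy source by reverse engineering.
clauses = [
    {"id": "CALC-RATE",   "keywords": {"rate", "tariff", "compute"}},
    {"id": "PRINT-BILL",  "keywords": {"billing", "report"}},
    {"id": "READ-MASTER", "keywords": {"customer", "file"}},
]

def candidate_maintenance_points(req, clauses):
    scored = [(len(req["keywords"] & c["keywords"]), c["id"]) for c in clauses]
    return [cid for score, cid in sorted(scored, reverse=True) if score > 0]

print(candidate_maintenance_points(requirement, clauses))  # ['CALC-RATE', 'PRINT-BILL']
```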

8.
Expert critics have been built to critique human performance in areas such as engineering design and decision making. We suggest that critics can also be useful in the building and use of knowledge-based design systems (KBDSs). Knowledge engineers elicit knowledge from domain experts and build a knowledge-based design system, and the system generates designs. The amount of knowledge the system possesses and the way it applies that knowledge directly influence the performance of its designs. Critics are therefore proposed to assist in acquiring sufficient knowledge for constructing a desirable system and in applying the proper knowledge when generating designs. Methodologies for equipping a KBDS with critics are developed. Our experience in building and using a KBDS shows the applicability and capability of these critics.

9.
In this work we present a verification methodology for real-time distributed systems, based on their modular decomposition into processes. Given a distributed system, each of its components is reduced by abstracting away details that are irrelevant to the required specification. The abstract components are then composed to form an abstract system, to which a model-checking procedure is applied. The abstraction relation and the specification language guarantee that if the abstract system satisfies a specification, then the original system satisfies it as well. The specification language RTL is a branching-time version of the real-time temporal logic TPTL presented by Alur and Henzinger [1]. Its model checking is linear in the size of the system and exponential in the size of the formula. Two notions of abstraction for real-time systems are introduced, each preserving a sublanguage of RTL.
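To give a feel for the style of logic involved (this is an illustrative formula, not one from the paper), a TPTL-like bounded-response property can be written with freeze quantifiers binding the times of the request and the response:

```latex
% Every request is answered within 5 time units (illustrative only):
\Box\, x.\bigl(\mathit{request} \rightarrow \Diamond\, y.(\mathit{response} \wedge y \le x + 5)\bigr)
```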

10.
An approach to the specification of requirements and verification of design for real-time systems is presented. A system is defined by a conventional mathematical model for a dynamic system, in which application-specific states denote functions of real time. Specifications are formulas in duration calculus, a real-time interval logic in which predicates define durations of states. Requirements define safety and functionality constraints on the system or a component. A top-level design is given by a control law: a predicate that defines an automaton controlling the transitions between phases of operation. Each phase maintains certain relations among the system states; this is analogous to the control functions known from conventional control theory. The top-level design is decomposed into an architecture for a distributed system with specifications for sensor, actuator, and program components. Programs control the distributed computation through synchronous events. Sensors and actuators relate events to system states. Verification is a deduction showing that a design implies the requirements.
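For readers unfamiliar with duration calculus, a typical requirement over state durations (an illustrative example, not taken from this abstract) bounds the accumulated time a critical state such as Leak may hold within any sufficiently long observation interval of length ℓ:

```latex
% In every interval of length at least 60, Leak holds for at most 1/20 of the time:
\Box\bigl(\ell \ge 60 \;\Rightarrow\; 20\!\int \mathit{Leak} \;\le\; \ell\bigr)
```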

11.
This paper presents an approach to the problem of documenting the design of a network of components and verifying that its structure is complete and consistent (i.e., that the components, functioning together, will satisfy the requirements of the complete product) before the components are implemented. Our approach differs from others in that both hardware and software components are viewed as hardware-like devices in which an output value can change instantaneously when input values change, and all components operate synchronously rather than in sequence. We define what we mean by completeness and consistency and illustrate how the documents can be used to verify a design before it is implemented.

12.
One of the major obstacles to the routine exploitation of knowledge-based and expert systems is the difficulty of validating the knowledge base and of maintaining it in a state that reflects current knowledge. This is particularly important for systems based on law or regulations, where it is vital that the knowledge base be a true reflection of the legal position and where there is a constant stream of changes to the correct legal position. Maintenance Assistance for Knowledge Engineers (MAKE) is a project designed to explore these issues and to build a set of tools supporting the validation and maintenance of knowledge bases derived from regulations. These tools include facilities to examine the structural features of the knowledge base, so as to guard against redundancy, non-provability, and contradiction; facilities to identify parts of the knowledge base jeopardised by changes in the domain, or in the understanding of the domain; and facilities to perform a variety of "housekeeping" tasks. The paper first analyses the different types of change that may be required to maintain the knowledge base, and then describes the set of tools developed in the MAKE project to accommodate these changes.

13.
A rule-based approach for the automatic enforcement of consistency constraints is presented. In contrast to existing approaches that compile consistency checks into application programs, this approach centralizes consistency enforcement in a separate module called a knowledge-base management system. Exception handlers for constraint violations are represented as rule entities in the knowledge base. For this purpose, a new form of production rule, the activation pattern controlled rule, is introduced: in contrast to classical forward-chaining schemes, activation pattern controlled rules are triggered by the intent to apply a specific operation, not necessarily by the result of applying it. Techniques for implementing this approach are discussed, and experiments in speeding up system performance are described. Furthermore, an argument is made for more tolerant consistency enforcement strategies, and their integration into the rule-based approach to consistency enforcement is discussed.
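A toy illustration of the "triggered by intent" idea, a rule that fires when an operation is about to be applied, so a constraint violation can be handled before the update is committed, might look as follows. The registration API, the example constraint, and the repair strategy are all invented for illustration and are not the system described in the paper.

```python
# Hypothetical sketch: rules fire on the *intent* to apply an operation,
# so a consistency violation can be handled before the update takes effect.
rules = []

def on_intent(op_name):
    """Register a rule whose activation pattern is the intended operation."""
    def register(handler):
        rules.append((op_name, handler))
        return handler
    return register

db = {"salary": {"alice": 3000}}

@on_intent("raise_salary")
def cap_raise(args):
    # Exception handler for a constraint: salaries may rise by at most 20%.
    old = db["salary"][args["who"]]
    if args["new"] > old * 1.2:
        args["new"] = old * 1.2          # repair the request instead of rejecting it

def apply(op_name, args):
    for name, handler in rules:
        if name == op_name:              # rule triggered by the intent, not the result
            handler(args)
    db["salary"][args["who"]] = args["new"]

apply("raise_salary", {"who": "alice", "new": 5000})
print(db["salary"]["alice"])             # 3600.0 -- capped by the rule
```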

14.
As the role of knowledge-based systems in the marketplace grows, the need to communicate their knowledge clearly to people increases. However well represented internally, a system's knowledge cannot be used to train, advise, or assist an individual unless it can be discussed naturally. Recent efforts to standardize knowledge coding and expert system user interfaces fall short of defining a real ability to communicate knowledge. Most systems are unable to explain their knowledge, inferences, or applicability to anyone but a well-trained, domain-knowledgeable user. In this paper, we examine the features needed to enable intelligent expression of knowledge and survey previous work in this area. We also describe an intelligent text generator (ITG) designed as an adjunct to an object-oriented expert system. We present a structure within which a rule-based system for a given domain can be extended to communicate its knowledge intelligibly, in any of several natural languages.

15.
This paper highlights the use of parallel processing in knowledge-based diagnostic systems. A MIMD machine connected in a cubic-mesh fashion and programmed in Parlog is suggested for implementing such systems. An algorithm supporting the concurrent execution of multiple conflict-set rules of the same production-system program is presented. A specific application of these principles to the maintenance of communication systems is discussed.
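Parlog itself is not shown here, but the notion of evaluating several conflict-set rules concurrently can be sketched in Python with a thread pool. The diagnostic facts and rules are invented, and true MIMD, cubic-mesh execution is of course not modelled by this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

# Invented diagnostic facts and rules; each rule's condition is checked concurrently.
facts = {"no_carrier": True, "high_error_rate": True, "power_ok": True}

rules = [
    ("check_modem",  lambda f: f["no_carrier"] and f["power_ok"], "suspect modem"),
    ("check_line",   lambda f: f["high_error_rate"],              "suspect noisy line"),
    ("check_supply", lambda f: not f["power_ok"],                 "suspect power supply"),
]

def evaluate(rule):
    name, condition, diagnosis = rule
    return diagnosis if condition(facts) else None

with ThreadPoolExecutor() as pool:                 # conflict set evaluated in parallel
    conclusions = [d for d in pool.map(evaluate, rules) if d]

print(conclusions)   # ['suspect modem', 'suspect noisy line']
```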

16.
Knowledge, 1999, 12(1-2): 45-54
An ontology defines the terminology of a domain of knowledge: the concepts that constitute the domain, and the relationships between those concepts. In order for two or more knowledge-based systems to interoperate—for example, by exchanging knowledge, or collaborating as agents in a co-operative problem-solving process—they must commit to the definitions in a common ontology. Verifying such commitment is therefore a prerequisite for reliable knowledge-based system interoperability. This article shows how existing knowledge base verification techniques can be applied to verify the commitment of a knowledge-based system to a given ontology. The method takes account of the fact that an ontology will typically be expressed using a different knowledge representation language to the knowledge base, by incorporating translation into the verification procedure. While the representation languages used are specific to a particular project, their features are general and the method has broad applicability.
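A toy illustration of the kind of check involved, verifying after translation that every term the knowledge base uses is defined in the ontology with a compatible arity, is given below. The representations are invented and far simpler than those handled by the method described in the article.

```python
# Hypothetical ontology: term name -> expected number of arguments.
ontology = {"employs": 2, "person": 1, "company": 1}

# Knowledge-base statements already translated into the ontology's terminology.
kb_statements = [
    ("employs", ("acme", "alice")),
    ("person", ("alice",)),
    ("manager", ("bob",)),        # term not defined in the ontology
    ("employs", ("acme",)),       # wrong arity
]

def commitment_violations(kb, onto):
    problems = []
    for term, args in kb:
        if term not in onto:
            problems.append(f"undefined term: {term}")
        elif len(args) != onto[term]:
            problems.append(f"arity mismatch for {term}: got {len(args)}, expected {onto[term]}")
    return problems

for p in commitment_violations(kb_statements, ontology):
    print(p)
```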

17.
Validation and verification of expert systems, or knowledge-based systems, is a critical issue in the development and deployment of robust systems. This article is a comprehensive survey of the developments and trends in this field. More than 300 references are included in the References and Additional Readings at the end of the article.

18.
PERFECT (Programming EnviRonment For Expert systems Constrained in reasoning Time) aims to provide the necessary engineering support for real-time knowledge-based system development. PERFECT bridges the gap between traditional analysis and design methodologies and the implementation tools for these systems. It does so by providing the means to construct a knowledge model and to choose a suitable inference strategy. The properties of the knowledge model and inference strategy can then be analysed; for instance, it can be checked whether the knowledge model contains sufficient knowledge to diagnose a fault in an industrial process, and whether the inference engine is able to answer a given problem in time. If not, the analyser of PERFECT proposes an alternative structure for the knowledge model. When the constructed knowledge model and the chosen inference strategy show the required time efficiency, the compiler of PERFECT can translate them into an actual real-time knowledge-based system in COGSYS. In addition, guidelines are provided for the design of the human-machine interface. The resulting system is an instrument, a source of information that the human operator can use during problem solving, rather than a prosthesis, a device that solves the entire problem by itself and presents the outcome to the operator.
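One of the analyses mentioned, checking whether an inference chain can deliver an answer within a deadline, can be caricatured with a worst-case estimate like the one below. The rule timings, the chain structure, and the deadline are all hypothetical and do not reflect PERFECT's actual analyser.

```python
# Hypothetical worst-case response-time check for a chain of rule firings.
# Each entry: rule name -> assumed worst-case firing time in milliseconds.
rule_wcet_ms = {"read_sensors": 5, "classify_fault": 12, "select_repair": 8, "emit_advice": 3}

inference_chain = ["read_sensors", "classify_fault", "select_repair", "emit_advice"]
deadline_ms = 25

worst_case = sum(rule_wcet_ms[r] for r in inference_chain)
print(f"worst case {worst_case} ms, deadline {deadline_ms} ms:",
      "OK" if worst_case <= deadline_ms else "deadline may be missed")
```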

19.
Existing competence systems are based on a rationalistic view of competence. While these competence systems might work in job-based organizations, we argue that in more dynamic settings, such as in knowledge-based organizations, the interest-informed actions that capture the emergent competencies of tomorrow require different types of information technology support. The main objective of this paper is to elaborate on the possibilities and implications of using interest-activated technology as a design rationale for competence systems. This paper is based on an action case study of an implemented interest-activated Intranet recommender system prototype at Volvo Information Technology AB in Gothenburg, Sweden. On the basis of how organizational members used this prototype to find information they were interested in, our research team was able to inquire into how personal interest, embodied in information-seeking activities, could be a means for identifying competence. Building on the relation between personal interest and competence, we discuss competence systems design and spell out explicit implications for managerial practice in knowledge-based organizations.

20.
Knowledge-based systems (KBSs) are being used in many application areas where their failures can be costly because of losses in services, property, or even life. To ensure their reliability and dependability, it is therefore important that these systems be verified and validated before they are deployed. This paper provides perspectives on issues and problems that affect the verification and validation (V&V) of KBSs. Some of the reasons why V&V of KBSs is difficult are presented. The paper also provides an overview of the different techniques and tools that have been developed for performing V&V activities. Finally, some research issues relevant to future work in this field are discussed.
