Similar Documents
20 similar documents found.
1.
The use of decision tables to verify knowledge-based systems (KBS) has been advocated several times in the validation and verification (V&V) literature. However, one of the main drawbacks of these systems is that they fail to detect anomalies that occur over rule chains. In a decision-table-based context this means that anomalies arising from interactions between tables are neglected. These anomalies are called inter-tabular anomalies. In this paper we investigate an approach that deals with inter-tabular anomalies. One prerequisite for the approach was that the knowledge engineer can use it during the development of the KBS, which requires that the anomaly check can be performed on-line. As a result, the approach partly uses heuristics where exhaustive checks would be too inefficient. All the detection facilities described have been implemented in a table-based development tool, whose use is briefly illustrated. In addition, some experiences in verifying large knowledge bases are discussed.
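As an illustration of the inter-tabular anomalies discussed above, the sketch below flags a rule in a downstream table whose condition can never be produced by the upstream table (a chain anomaly). The tables and values are hypothetical, not from the paper's tool:

```python
def producible_values(table):
    """Conclusions that some row of the table can actually produce."""
    return {conclusion for _, conclusion in table}

def unfirable_rows(upstream, downstream):
    """Rows of `downstream` whose condition value is never produced
    by `upstream` -- an inter-tabular (chain) anomaly."""
    possible = producible_values(upstream)
    return [row for row in downstream if row[0] not in possible]

# Table A derives a risk level; Table B consumes it to set a premium.
table_a = [("young", "high"), ("middle", "medium"), ("old", "medium")]
table_b = [("high", 500), ("medium", 300), ("low", 100)]

print(unfirable_rows(table_a, table_b))  # [('low', 100)] -- "low" never fires
```

A real checker would also cover redundancy and contradiction across chains; exhaustive pairwise checks like this are exactly where the approach described above resorts to heuristics for large table systems.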

2.
Decision tables are widely used in many knowledge-based and decision support systems. They allow relatively complex logical relationships to be represented in an easily understood form and processed efficiently. This paper describes second-order decision tables (decision tables that contain rows whose components have sets of atomic values) and their role in knowledge engineering to: (1) support efficient management and enhance comprehensibility of tabular knowledge acquired by knowledge engineers, and (2) automatically generate knowledge from a tabular set of examples. We show how second-order decision tables can be used to restructure acquired tabular knowledge into a condensed but logically equivalent second-order table. We then present the results of experiments with such restructuring. Next, we describe SORCER, a learning system that induces second-order decision tables from a given database. We compare SORCER with IDTM, a system that induces standard decision tables, and a state-of-the-art decision tree learner, C4.5. Results show that in spite of its simple induction methods, on average over the data sets studied, SORCER has the lowest error rate.
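A minimal sketch of the restructuring idea (hypothetical rows, not the SORCER algorithm): two first-order rows that agree on all but one condition and share a decision can be condensed into one second-order row whose differing component holds a set of values.

```python
def merge_rows(r1, r2):
    """Merge two rows (condition frozensets plus a decision) that differ
    in exactly one condition column and share the decision; else None."""
    *c1, d1 = r1
    *c2, d2 = r2
    if d1 != d2:
        return None
    diff = [i for i in range(len(c1)) if c1[i] != c2[i]]
    if len(diff) != 1:
        return None
    merged = list(c1)
    merged[diff[0]] = c1[diff[0]] | c2[diff[0]]
    return (*merged, d1)

# First-order rows lifted to set-valued components:
row1 = (frozenset({"sunny"}), frozenset({"weekend"}), "go out")
row2 = (frozenset({"cloudy"}), frozenset({"weekend"}), "go out")
m = merge_rows(row1, row2)
print(sorted(m[0]), m[2])  # ['cloudy', 'sunny'] go out
```

Repeating such merges until none apply yields a condensed table that is logically equivalent to the original.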

3.
Decision tables     
Decision tables have been known for almost 20 years, yet there are still data processing shops where systems analysts and programmers have not used them at all. That is why it seems important to re-open their case and to publicize experience with their use, as well as more recent developments and expected future trends. Experience gained with decision tables over the past 10 years shows that they can be of great help in programming and program maintenance, as well as in design communication between analysts and programmers. To assist in training, a brief user's guide is included in this article, which is believed to ease a first attempt at designing a decision table. Finally, the procedure adopted to implement a machine-independent (portable) decision-table preprocessor for COBOL is outlined. Inasmuch as the availability of such a preprocessor appears essential to the adoption of decision tables, this single fact may well become a milestone in their overall worldwide popularity.
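For readers who have not met the formalism, a limited-entry decision table pairs condition entries (yes/no/don't-care) with actions, and a preprocessor expands it into nested IFs. A minimal sketch in Python (an illustrative example, not the COBOL preprocessor described):

```python
conditions = [
    lambda o: o["amount"] > 1000,   # C1: large order?
    lambda o: o["member"],          # C2: member?
]
# Each rule: (one entry per condition: True/False/None = don't care), action.
rules = [
    ((True,  True),  "discount 10%"),
    ((True,  False), "discount 5%"),
    ((False, None),  "no discount"),
]

def decide(order):
    """Fire the first rule whose entries match the evaluated conditions."""
    facts = [c(order) for c in conditions]
    for entries, action in rules:
        if all(e is None or e == f for e, f in zip(entries, facts)):
            return action
    raise ValueError("incomplete decision table")

print(decide({"amount": 1500, "member": False}))  # discount 5%
```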

4.
As two classical measures, approximation accuracy and consistency degree can be employed to evaluate the decision performance of a decision table. However, these two measures cannot give elaborate depictions of the certainty and consistency of a decision table when their values are equal to zero. To overcome this shortcoming, we first classify decision tables in rough set theory into three types according to their consistency and introduce three new measures for evaluating the decision performance of a decision-rule set extracted from a decision table. We then analyze how each of these three measures depends on the condition granulation and decision granulation of each of the three types of decision tables. Experimental analyses on three practical data sets show that the three new measures appear to be well suited for evaluating the decision performance of a decision-rule set and are much better than the two classical measures.
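The classical approximation accuracy mentioned above is the ratio of the sizes of the lower and upper approximations of a decision class under the condition-attribute partition. A minimal sketch (toy table, not the paper's data sets):

```python
from collections import defaultdict

def approximation_accuracy(rows, decision_value):
    """rows: list of (condition_tuple, decision). Accuracy =
    |lower approximation| / |upper approximation| of the class."""
    blocks = defaultdict(list)
    for cond, dec in rows:
        blocks[cond].append(dec)
    lower = upper = 0
    for decs in blocks.values():
        if all(d == decision_value for d in decs):
            lower += len(decs)   # block certainly in the class
        if any(d == decision_value for d in decs):
            upper += len(decs)   # block possibly in the class
    return lower / upper if upper else 0.0

table = [(("a", 1), "yes"), (("a", 1), "no"),   # inconsistent block
         (("b", 2), "yes"), (("c", 3), "no")]
print(round(approximation_accuracy(table, "yes"), 3))  # 0.333
```

When every block is inconsistent the lower approximation is empty and the accuracy collapses to zero, which is exactly the degenerate case the new measures are designed to discriminate.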

5.
This paper discusses a method for constructing intelligent decision-rule models in large, complex decision support systems. It is a knowledge representation method based on decision tables. Building on the traditional decision table, it absorbs ideas and techniques from production rules, frame representation, fuzzy theory, and the relational model, extending the traditional decision table into an intelligent decision-representation tool with good structure, strong expressive power, and good operability. The tool is used to represent complex domain knowledge in large decision support systems, formalizing loosely organized empirical rules into intelligent decision-rule models, thereby strengthening their structure and operability and effectively supporting operations on other information.

6.

During the last decade, databases have grown rapidly in size and number as a result of rapid advances in database capacity and management techniques. This expansive growth in data and databases has created a pressing need for more powerful techniques to convert the vast pool of data into valuable information. For strategic planning and decision-making, many companies and researchers have recognized mining useful information and knowledge from large databases as a key research topic and as an opportunity for major revenues and improved competitiveness. In this paper, we explore a new rule generation algorithm (based on rough set theory) that can generate a minimal set of rule reducts, and a rule generation and rule induction program (RGRIP) which can efficiently induce decision rules from conflicting information systems. All the methods are illustrated with numerical examples.
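The flavor of rough-set rule generation can be sketched as a search for a shortest condition subset that still determines a row's decision (a value reduct); this is an illustrative brute-force version on toy data, not the RGRIP algorithm:

```python
from itertools import combinations

def minimal_rule(rows, attrs, target_idx):
    """Shortest attribute subset whose values in rows[target_idx]
    imply its decision consistently across all rows."""
    cond, dec = rows[target_idx]
    for k in range(1, len(attrs) + 1):
        for subset in combinations(range(len(attrs)), k):
            proj = tuple(cond[i] for i in subset)
            if all(d == dec for c, d in rows
                   if tuple(c[i] for i in subset) == proj):
                return {attrs[i]: cond[i] for i in subset}, dec
    return dict(zip(attrs, cond)), dec  # no shorter rule exists

attrs = ("outlook", "wind")
rows = [(("sunny", "weak"),   "play"),
        (("sunny", "strong"), "play"),
        (("rain",  "weak"),   "stay")]
print(minimal_rule(rows, attrs, 0))  # ({'outlook': 'sunny'}, 'play')
```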

7.
Extending the Rough Set Model Based on Database Systems
刘启和  陈雷霆  闵帆  蔡洪斌 《控制与决策》2006,21(12):1374-1378
The knowledge reduction algorithms in the database-system-based rough set model apply only to consistent decision tables, not to inconsistent ones. To overcome this limitation, an algorithm is given that converts an inconsistent decision table into a consistent one; it is proved that the conversion keeps the core and the set of reducts unchanged, and its time complexity is analyzed. On this basis, the conversion algorithm is described using the set operations and SQL of a database system, extending the knowledge reduction algorithms of the database-system-based rough set model to inconsistent decision tables. Theoretical analysis and experimental results show that the extended algorithms remain efficient.
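One common way to turn an inconsistent decision table into a consistent one is to replace each object's decision with its generalized decision, i.e. the set of all decisions occurring in its condition-equivalence class. A minimal in-memory sketch (the cited paper instead works inside the database with set operations and SQL):

```python
from collections import defaultdict

def make_consistent(rows):
    """Replace each decision by the generalized decision of the
    object's condition class; the result is consistent by construction."""
    block = defaultdict(set)
    for cond, dec in rows:
        block[cond].add(dec)
    return [(cond, frozenset(block[cond])) for cond, _ in rows]

rows = [(("a",), "yes"), (("a",), "no"), (("b",), "yes")]
for cond, dec in make_consistent(rows):
    print(cond, sorted(dec))
# ('a',) ['no', 'yes']
# ('a',) ['no', 'yes']
# ('b',) ['yes']
```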

8.
9.
Knowledge Acquisition Algorithms in Incomplete Information Systems
Rough set theory is a soft-computing tool for handling vague and uncertain knowledge. With rough set theory, the knowledge hidden in a system can be expressed in the form of decision rules. Based on the concepts of lower and upper approximations, decision rules can be divided into certain rules and possible rules. This paper presents algorithms for knowledge acquisition from incomplete information systems; these algorithms generate one kind of certain rule and two kinds of possible rules from an incomplete decision table. An algorithm for computing description reducts in incomplete decision tables is also presented.
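In an incomplete table, objects are compared with a tolerance relation that treats a missing value ('*') as compatible with anything; certain rules then come from the lower approximation and possible rules from the upper. A minimal sketch (toy data, not the paper's algorithms):

```python
def tolerant(x, y):
    """Objects agree wherever neither has a missing value ('*')."""
    return all(a == b or a == "*" or b == "*" for a, b in zip(x, y))

def certain_and_possible(rows, decision_value):
    """Indices whose tolerance class lies entirely inside the decision
    class (certain) or merely overlaps it (possible)."""
    certain, possible = [], []
    for i, (xi, _) in enumerate(rows):
        cls = [d for xj, d in rows if tolerant(xi, xj)]
        if all(d == decision_value for d in cls):
            certain.append(i)
        if any(d == decision_value for d in cls):
            possible.append(i)
    return certain, possible

rows = [(("a", "*"), "yes"), (("a", "1"), "yes"),
        (("a", "2"), "no"),  (("b", "2"), "no")]
print(certain_and_possible(rows, "yes"))  # ([1], [0, 1, 2])
```

Object 1 supports a certain rule for "yes"; objects 0 and 2 only support possible rules, because their tolerance classes mix decisions.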

10.
This paper describes the design and implementation of a system for the automatic generation and management of decision models. The system helps users build decision models based on decision tables, performs various checks automatically, and can convert such a model into two other kinds of decision models. It also provides management facilities for a decision-model library and an application-case library. The system has been successfully applied in the development of several large information systems.

11.
Reliability has become a key factor in KBS development. For this reason, it has been suggested that verification and validation (V&V) should become an integrated part of activities throughout the whole KBS development cycle. In this paper, it will be illustrated how the PROLOGA workbench integrates V&V aspects into its modelling environment, such that these techniques can be of assistance in the process of knowledge acquisition and representation. To this end, verification has to be performed incrementally and can no longer be delayed until after the system has been completed. It will be shown how this objective can be realised through an approach that uses the decision table formalism as a modelling instrument.

12.
Recently, as the Internet has become more widely used, Electronic Commerce (EC) has emerged and developed into a high-level business environment. A customer-centric EC model is important for the success of EC, and this study presents a new customer-centric EC model for a make-to-order (MTO) semiconductor manufacturing environment. The proposed model makes the process sampling method transparent: it provides online semiconductor customers with performance information about the sampling methods available at every manufacturing process step for their own products, and lets them select a desirable one on the EC web site according to their purchase situation. The model requires an intelligent decision support system, the customized sampling decision support system (CSDSS), which can autonomously generate the available customized sampling methods and supply their performance information to the EC system. We implemented an Internet-based prototype of CSDSS with an architecture based on intelligent-agent technology, using that technology to integrate a data mining process for generating optimal sampling methods into the DSS framework.

13.
Aggregation in Natural Language Generation
The content of real-world databases, knowledge bases, database models, and formal specifications is often highly redundant and needs to be aggregated before these representations can be successfully paraphrased into natural language. Generating natural language from these representations involves a number of processes, one of which is sentence planning, where aggregation takes place. Aggregation, known in linguistics as ellipsis or coordination, is the process that removes redundancies during generation of a natural language discourse, without losing any information.
The article describes a set of corpus studies that focus on aggregation, provides a set of aggregation rules, and finally shows how these rules are implemented in a couple of prototype systems. We further develop the concept of aggregation and discuss it in connection with the growing literature on the subject. This work offers a new tool for the sentence planning phase of natural language generation systems.
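A toy version of one aggregation rule, conjunction reduction, where propositions sharing subject and verb are collapsed into a single coordinated sentence (illustrative only, not the article's rule set):

```python
def aggregate(props):
    """Conjoin adjacent (subject, verb, object) propositions that share
    subject and verb, dropping the repeated material."""
    grouped = []
    for subj, verb, obj in props:
        if grouped and grouped[-1][0] == (subj, verb):
            grouped[-1][1].append(obj)
        else:
            grouped.append([(subj, verb), [obj]])
    sentences = []
    for (subj, verb), objs in grouped:
        if len(objs) == 1:
            sentences.append(f"{subj} {verb} {objs[0]}.")
        else:
            sentences.append(
                f"{subj} {verb} {', '.join(objs[:-1])} and {objs[-1]}.")
    return " ".join(sentences)

props = [("John", "bought", "apples"),
         ("John", "bought", "bread"),
         ("Mary", "bought", "milk")]
print(aggregate(props))  # John bought apples and bread. Mary bought milk.
```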

14.
With the large number of software products available today, beyond functional completeness, software must meet higher requirements for security and stability, so extensive testing is needed before release to raise quality. Because developers are involved in both development and release, their view of defects is limited, while testers writing test cases from inconsistent documentation produce cases of greatly reduced value; in practice, testing is often badly disconnected from development and done merely to produce a report, defeating its original purpose. To address these problems, this paper proposes a model-based test-case generation method that builds on workflow diagrams, decision tables, state transitions, and other testing techniques, and applies boundary-value and equivalence-class ideas. The method spans the whole software development life cycle, so testing can begin at the earliest stages of development and expose weaknesses in the design; it can also generate test cases automatically, improving tester efficiency.
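The boundary-value and equivalence-class ideas mentioned above can be sketched for an integer input range (a generic illustration, not the paper's generator):

```python
def boundary_values(lo, hi):
    """Classic boundary picks for a valid range [lo, hi], plus one
    invalid value on each side."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One representative per equivalence class: below, inside, above."""
    return {"below": lo - 1, "inside": (lo + hi) // 2, "above": hi + 1}

print(boundary_values(1, 100))      # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))  # {'below': 0, 'inside': 50, 'above': 101}
```

A model-based generator would derive such ranges from the workflow diagram, decision table, or state model instead of hard-coding them.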

15.
The research reported in this paper takes a decision-based design perspective, wherein the principal (but not only) role of a designer is to make decisions. Decision workflows are the processes by which solutions for the design of complex systems are generated. They are core to design processes: a set of decisions is connected (or interconnected) to generate shared and desired design outputs. Careful configuration of decision workflows is very important to ensure the generation of designs using available resources. Configuring a decision workflow requires a designer to compose feasible workflows from basic elements and then select an appropriate one for implementation in designing a product or system. In this paper, we propose a template-based method for the design and execution of decision workflows associated with designing engineered systems. The value of the method lies in helping designers rapidly plan the processes, namely the decision workflows, for designing products or systems. Moreover, because these decision workflows are modeled computationally, designers can execute them to explore the solution space and identify satisficing design solutions in early design stages. A gearbox with connected gears and shafts is a typical complex engineered system that can be partitioned into multiple levels of interacting subsystems. We illustrate the method and the decision workflows with a gear-and-shaft (within a gearbox) design example.

16.
刘奇  马娆  俞凯 《计算机学报》2022,45(2):289-301
Natural language generation is an important and challenging class of artificial intelligence tasks. The Long Short-Term Memory (LSTM) language model is currently the dominant model for natural language generation. However, the LSTM language model is trained with a word-level cross-entropy criterion, which causes the exposure bias problem. Moreover, natural language generation tasks are usually evaluated with sequence-level BLEU scores or word error rate, which...

17.
An important objective of data mining is the development of predictive models. Based on a number of observations, a model is constructed that allows the analysts to provide classifications or predictions for new observations. Currently, most research focuses on improving the accuracy or precision of these models and comparatively little research has been undertaken to increase their comprehensibility to the analyst or end-user. This is mainly due to the subjective nature of ‘comprehensibility’, which depends on many factors outside the model, such as the user's experience and his/her prior knowledge. Despite this influence of the observer, some representation formats are generally considered to be more easily interpretable than others. In this paper, an empirical study is presented which investigates the suitability of a number of alternative representation formats for classification when interpretability is a key requirement. The formats under consideration are decision tables, (binary) decision trees, propositional rules, and oblique rules. An end-user experiment was designed to test the accuracy, response time, and answer confidence for a set of problem-solving tasks involving the former representations. Analysis of the results reveals that decision tables perform significantly better on all three criteria, while post-test voting also reveals a clear preference of users for decision tables in terms of ease of use.

18.
A Comparison of Three Discernibility Matrices
The discernibility matrix is one of the important concepts in rough set theory; it can be used to compute the core and the reducts of a decision table. Several definitions of the discernibility matrix currently exist, a multiplicity caused by the inconsistency of decision tables. This paper analyzes the relationship between consistent and inconsistent decision tables, gives a method for converting an inconsistent decision table into a consistent one, and gives an equivalence-based definition of the discernibility matrix. On this basis, the relationships among three kinds of discernibility matrices are discussed and proved. The results show that, using this conversion method and the equivalence definition, the three discernibility matrices can be unified, so that in practical applications a single uniform method can be used to construct the discernibility matrix.
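For a consistent decision table, the discernibility matrix records, for every pair of objects with different decisions, the condition attributes on which they differ; singleton entries give the core. A minimal sketch of the standard (Skowron-style) construction on toy data:

```python
def discernibility_matrix(rows, n_attrs):
    """Entry (i, j): attributes discerning objects i and j when their
    decisions differ."""
    m = {}
    for i, (ci, di) in enumerate(rows):
        for j in range(i):
            cj, dj = rows[j]
            if di != dj:
                m[(i, j)] = {a for a in range(n_attrs) if ci[a] != cj[a]}
    return m

def core(matrix):
    """Core = attributes appearing as singleton matrix entries."""
    return {next(iter(e)) for e in matrix.values() if len(e) == 1}

rows = [((0, 1), "yes"), ((0, 0), "no"), ((1, 1), "no")]
m = discernibility_matrix(rows, 2)
print(m)        # {(1, 0): {1}, (2, 0): {0}}
print(core(m))  # {0, 1}
```

The three definitions compared in the paper differ precisely in how the decision-difference test is handled for inconsistent tables.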

19.
Simulation can cope with the uncertain and dynamic nature of industrial value chains. However, in-depth system expertise is indispensable for mapping objects and constraints from the real world to a virtual model. This knowledge intensity leads to long development times for such projects, which conflicts with the need for timely decision support. Since more and more companies use industrial knowledge graphs and ontologies to foster their knowledge management, this paper proposes a framework for efficiently deriving a simulation model from such semantic knowledge bases. As part of the approach, a novel Simulation Ontology provides a standardized meta-model for hybrid simulations. Its instantiation enables the user to come up with a fully parameterized formal simulation model. Newly developed Mapping Rules facilitate this process by providing guidance on how to turn knowledge from existing ontologies, which describe the system to be simulated, into instances of the Simulation Ontology. The framework is completed by a parsing procedure for an automated transformation of this conceptual model into an executable one. This novel modeling approach makes model development more efficient by reducing its complexity. It is validated in a use-case implementation from semiconductor manufacturing, where cross-domain knowledge was required to model and simulate the impacts of the COVID-19 pandemic on a global supply chain network.

20.
The flow table is the core component through which the control plane and the data plane of a software-defined network interact, and it is key to the global coordination and dynamic mapping of security policies. Building flow tables that embody security policies, however, must cope with flow-knowledge elements that are highly scattered, constantly expanding, and hard to satisfy with a standalone application or preset rules. To address this, this paper adds a knowledge plane alongside the control, data, and application planes of the software-defined network, and builds a flow knowledge graph that gathers flow tables and their related security-knowledge elements, on which flow rules are selected or generated. For flow-rule selection, a flow-rule search tree that merges single and composite rules with the same source and destination addresses is built and linked to the flow knowledge graph, enabling fast selection of and decisions over existing flow rules. For flow-rule learning and generation, flow-rule search trees are fused into graphs and split to produce a flow-rule security decision graph, from which flow rules are generated or selected according to flow labels. In the evaluation, the practical applications of the flow knowledge graph are examined from three angles: interaction with the application plane, flow-rule selection, and flow-rule learning; experiments measure the performance of the key algorithms based on the flow knowledge graph. With the graphs of the flow knowledge plane as infrastructure, specific scenarios can be explored further, and combining flow security labels with applications can advance practices such as flow-rule evolution.
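A toy version of the same-source/destination rule-merging step (illustrative data; the paper's search tree and decision graph are far richer):

```python
def merge_flow_rules(rules):
    """Merge (src, dst, action) rules with identical endpoints into a
    single composite rule carrying the union of their actions."""
    merged = {}
    for src, dst, action in rules:
        merged.setdefault((src, dst), set()).add(action)
    return merged

rules = [("10.0.0.1", "10.0.0.2", "forward:1"),
         ("10.0.0.1", "10.0.0.2", "mirror:9"),
         ("10.0.0.3", "10.0.0.2", "drop")]
merged = merge_flow_rules(rules)
print(sorted(merged[("10.0.0.1", "10.0.0.2")]))  # ['forward:1', 'mirror:9']
```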

