Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Neurules are a type of neuro-symbolic rule integrating neurocomputing and production rules. Each neurule is represented as an adaline unit. Neurules exhibit characteristics such as modularity, naturalness and the ability to perform interactive and integrated inference and to provide explanations for reached conclusions. One way of producing a neurule base is by converting an existing symbolic rule base, yielding an equivalent but more compact rule base. The conversion process merges symbolic rules having the same conclusion into one or more neurules. Because the adaline unit cannot handle inseparability, more than one neurule per conclusion may be produced by splitting the initial set of symbolic rules into subsets. This paper presents research work improving the conversion process in terms of runtime and the number of produced neurules. First, we show how much easier it is to construct a neurule base than a connectionist one. Second, we present alternative rule-set splitting methods. Finally, we define criteria concerning the ability or inability to convert a rule set into a single, equivalent but more compact neurule. With the application of such mergability criteria, the conversion of symbolic rules into neurules becomes more time-efficient. All of the above is supported by experimental results.
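The adaline-style evaluation behind a neurule can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation: a neurule asserts its conclusion when the weighted sum of its condition truth values plus a bias crosses a threshold. All names, factor values and the truth-value convention (1 = true, -1 = false, 0 = unknown) are assumptions for the sake of the example.

```python
# Minimal sketch of evaluating a neurule as an adaline unit (illustrative only).
from dataclasses import dataclass
from typing import List


@dataclass
class Neurule:
    bias: float               # bias (significance) factor of the rule
    weights: List[float]      # one significance factor per condition
    conclusion: str           # symbolic conclusion asserted when the rule fires

    def fires(self, condition_values: List[float]) -> bool:
        # Adaline-style activation: weighted sum of condition values plus bias,
        # passed through a hard threshold.
        s = self.bias + sum(w * v for w, v in zip(self.weights, condition_values))
        return s > 0


# Hypothetical neurule: "IF fever AND cough THEN flu" with illustrative factors.
rule = Neurule(bias=-2.0, weights=[1.5, 1.2], conclusion="flu")
print(rule.fires([1, 1]))   # True: both conditions hold
print(rule.fires([1, -1]))  # False: the second condition is false
```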

2.
Abstract: Neurules are a kind of hybrid rule that combines a symbolic (production rules) and a connectionist (adaline unit) representation. One way neurules can be produced is from training examples/patterns extracted from empirical data. However, in certain application fields not all of the training examples are available a priori; a number of them become available over time. In such cases, the neurule base must be updated. In this paper, methods for updating a hybrid rule base consisting of neurules to reflect the availability of new training examples are presented. They can be considered a type of incremental learning method that retains the entire induced hypothesis and all past training examples. The methods are efficient, since they require the least possible retraining effort and keep the number of produced neurules as small as possible. Experimental results supporting this claim are presented.

3.
Intelligent handling of anomalous information is a frontier and challenging research direction in information processing. Targeting the anomaly in which acquired information is missing because of noise interference and other factors, an intelligent classification algorithm for incomplete (missing) data is proposed. For a given incomplete sample, the method first derives one or several versions of estimated samples from the class information of its nearest neighbors, which preserves imputation accuracy while effectively characterizing the imprecision caused by the missing values; the samples carrying the estimated values are then classified. Finally, a new belief classification method is proposed within the evidential reasoning framework: samples that are difficult to assign to a single class are allocated to the corresponding compound (meta-)class, which describes the class uncertainty caused by the missing values while reducing the risk of misclassification. Real data sets from the UCI repository are used to validate the algorithm, and the experimental results show that it handles the classification of incomplete data effectively.
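A minimal sketch of the imputation step described above, assuming a plain Euclidean nearest-neighbor search over the observed attributes; the function name, the k value and the toy data are illustrative, not the authors' code. One estimated copy of the sample is produced per class found among the neighbors, so an ambiguous neighborhood yields several candidate versions.

```python
import numpy as np

def estimate_versions(sample, train_X, train_y, k=5):
    """Return one estimated copy of `sample` per class found among its k nearest
    neighbors, filling missing entries (NaN) with the class-wise neighbor mean."""
    observed = ~np.isnan(sample)
    # Distance computed on observed attributes only.
    dists = np.sqrt(np.nansum((train_X[:, observed] - sample[observed]) ** 2, axis=1))
    idx = np.argsort(dists)[:k]
    versions = {}
    for cls in np.unique(train_y[idx]):
        cls_rows = train_X[idx][train_y[idx] == cls]
        est = sample.copy()
        est[~observed] = np.nanmean(cls_rows[:, ~observed], axis=0)
        versions[cls] = est
    return versions  # several versions when the neighbors disagree on the class

# Illustrative data: four complete training samples, one query missing attribute 1.
X = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [5.1, 6.2]])
y = np.array([0, 0, 1, 1])
print(estimate_versions(np.array([1.05, np.nan]), X, y, k=3))
```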

4.
In most data fusion systems, the information extracted from each sensor (either numerical or symbolic) is represented as a real-valued degree of belief in an event, thereby taking into account the imprecise, uncertain, and incomplete nature of the information. Such degrees of belief are combined through numerical fusion operators, of which a very large variety has been proposed in the literature. In this paper we propose a classification, with respect to their behavior, of the operators arising from the different data fusion theories. Three classes are thus defined. This classification provides a guide for choosing an operator for a given problem. The choice can then be refined using the desired properties of the operators, their decisiveness, and how they deal with conflicting situations.
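To make the idea of behavior-based classes concrete, the toy sketch below contrasts three textbook combination behaviors (conjunctive/severe, disjunctive/indulgent, compromise/trade-off). These particular operators are standard illustrations chosen here as an assumption; they are not claimed to be the paper's specific taxonomy.

```python
# Toy degrees of belief in the same event reported by two sources.
a, b = 0.8, 0.4

conjunctive = min(a, b)        # severe: never exceeds the most cautious source
disjunctive = max(a, b)        # indulgent: follows the most optimistic source
compromise  = (a + b) / 2      # trade-off: sits between the two sources

print(conjunctive, disjunctive, compromise)   # 0.4 0.8 0.6
```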

5.
6.
Assessment of applications for life insurance is an important task in the insurance sector that concerns estimating the potential risks underlying an application, if accepted. This task is accomplished by specialized personnel of insurance companies. Because of recent financial crises the task has become more demanding, and intelligent computer-based methods could be employed to assist. In this paper, we present an intelligent approach to the assessment of life insurance applications based on an integration of neurule-based and case-based reasoning. Neurules are a type of neuro-symbolic rule that combines a symbolic (production rules) and a connectionist (adaline unit) representation. A characteristic of neurules is that, in contrast to other hybrid neuro-symbolic approaches, they retain the naturalness and modularity of symbolic rules. Neurules are produced from available symbolic rules that represent general knowledge, which however does not completely cover the domain. We use health condition, age, gender, annual income, profession, insurance type and primary life insurance benefit as assessment parameters in rule conditions. The integration of neurules and cases employs different types of indices for the cases, according to the different roles they play in neurule-based reasoning; this improves its accuracy. Experimental results demonstrate the effectiveness of the approach.

7.
Data-driven extended belief rule-base expert systems can handle uncertainty problems involving quantitative data or qualitative knowledge. Although the method has been widely studied and applied, research on problems with incomplete data is still lacking. In view of this, a new inference method for extended belief rule-base expert systems on incomplete data sets is proposed. First, an extended rule structure based on disjunctive normal form is introduced, and experiments are used to examine how, under the new rule structure, the number of referential values of the antecedent attributes of the belief rules affects the inference...

8.
Computers & Geosciences, 2006, 32(9): 1368-1377
SQL is the (more or less) standardised language that is used by the majority of commercial database management systems. However, it is seriously flawed, as has been documented in detail by Date, Darwen, Pascal, and others. One of the most serious problems with SQL is the way it handles missing data. It uses a special value 'NULL' to represent data items whose value is not known. This can have a variety of meanings in different circumstances (such as 'inapplicable' or 'unknown'). The SQL language also allows an 'unknown' truth value in logical expressions. The resulting incomplete three-valued logic leads to inconsistencies in data handling within relational database management systems. Relational database theorists advocate that a strict two-valued logic (true/false) be used instead, with prohibition of the use of NULL, and justify this stance by asserting that it is a true representation of the 'real world'. Nevertheless, in real geoscience data there is a complete gradation between exact values and missing data: for example, geochemical analyses are inexact (and the uncertainty should be recorded); the precision of numeric or textual data may also be expressed qualitatively by terms such as 'approximately' or 'possibly'. Furthermore, some data are by their nature incomplete: for example, where samples could not be collected or measurements could not be taken because of inaccessibility. It is proposed in this paper that the best way to handle such data sets is to replace the closed-world assumption and its concomitant strict two-valued logic, upon which the present relational database model is based, by the open-world assumption which allows for other logical values in addition to the extremes of 'true' and 'false'. Possible frameworks for such a system are explored, and could use Codd's 'marks', Darwen's approach (recording the status of information known about each data item), or other approaches such as fuzzy logic.
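The three-valued logic criticised above can be mimicked in a few lines. The sketch below uses Python's `None` as the 'unknown' truth value and follows the standard Kleene truth tables that SQL's logic is based on; it illustrates the behavior being discussed rather than reproducing any code from the paper.

```python
def and3(p, q):
    # Kleene/SQL-style three-valued AND; 'unknown' is represented by None.
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def or3(p, q):
    # Kleene/SQL-style three-valued OR.
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

# NULL-style comparisons yield 'unknown', which then propagates:
print(and3(True, None))   # None  -> a WHERE clause would not select this row
print(or3(False, None))   # None
print(or3(True, None))    # True
```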

9.
Data with missing values, or incomplete information, bring challenges to classification, as the incompleteness may significantly affect the performance of classifiers. In this paper, we handle missing values in both training and test sets with uncertainty and imprecision reasoning by proposing a new belief combination of classifiers (BCC) method based on evidence theory. The proposed BCC method aims to improve the classification performance of incomplete data by characterizing the uncertainty and imprecision brought by incompleteness. In BCC, different attributes are regarded as independent sources, and the collection of each attribute is considered as a subset. Multiple classifiers are then trained with each subset independently, allowing each observed attribute to provide a sub-classification result for the query pattern. Finally, these sub-classification results, weighted by discounting factors, provide supplementary information to jointly determine the final classes of query patterns. The weights consist of two aspects: global and local. The global weight, calculated by an optimization function, represents the reliability of each classifier, and the local weight, obtained by mining attribute distribution characteristics, quantifies the importance of the observed attributes to the pattern classification. Extensive comparative experiments, involving seven methods on twelve datasets, demonstrate that BCC outperforms all baseline methods in terms of accuracy, precision, recall and F1 measure, with pertinent computational costs.
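The core mechanism, discounting each sub-classifier's output by a weight and then fusing the discounted mass functions, can be sketched with generic Shafer discounting followed by Dempster's rule on a two-class frame. The class names, mass values and weights below are illustrative assumptions; this is not the authors' BCC implementation, only the standard evidence-theoretic machinery it builds on.

```python
from itertools import product

def discount(mass, alpha, frame):
    """Shafer discounting: keep a fraction `alpha` of each mass and move the
    remaining (1 - alpha) to total ignorance (the whole frame)."""
    out = {fs: alpha * m for fs, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass.
    return {fs: m / (1.0 - conflict) for fs, m in combined.items()}

FRAME = frozenset({"c1", "c2"})
# Hypothetical sub-classifier outputs (one per observed attribute).
m_attr1 = {frozenset({"c1"}): 0.7, frozenset({"c2"}): 0.3}
m_attr2 = {frozenset({"c1"}): 0.4, frozenset({"c2"}): 0.6}

# Discount each source by its weight (e.g. global x local), then combine.
fused = dempster(discount(m_attr1, 0.9, FRAME), discount(m_attr2, 0.5, FRAME))
print(fused)
```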

10.
Real-world problems often require purely deductive reasoning to be supported by other techniques that can cope with noise in the form of incomplete and uncertain data. Abductive inference tackles incompleteness by guessing unknown information, provided that it is compliant with given constraints. Probabilistic reasoning tackles uncertainty by weakening the sharp logical approach. This work aims at bringing both together and at further extending the expressive power of the resulting framework, called Probabilistic Expressive Abductive Logic Programming (PEALP). It adopts a Logic Programming perspective, introducing several kinds of constraints and allowing a degree of strength to be set on their validity. Procedures to handle both extensions, compatible with standard abductive and probabilistic frameworks, are also provided.

11.
In this paper, we extend the original belief rule-base inference methodology using the evidential reasoning approach by i) introducing generalised belief rules as the knowledge representation scheme, and ii) using the evidential reasoning rule for evidence combination in the rule-base inference methodology instead of the evidential reasoning approach. The result is a new rule-base inference methodology which is able to handle a combination of various types of uncertainty. Generalised belief rules are an extension of traditional rules where each consequent of a generalised belief rule is a belief distribution defined on the power set of propositions, or possible outcomes, that are assumed to be collectively exhaustive and mutually exclusive. This novel extension allows any combination of certain, uncertain, interval, partial or incomplete judgements to be represented as rule-based knowledge. It is shown that traditional IF-THEN rules, probabilistic IF-THEN rules, and interval rules are all special cases of the new generalised belief rules. The rule-base inference methodology has been updated to enable inference within generalised belief rule bases. The evidential reasoning rule for evidence combination is used for the aggregation of belief distributions of rule consequents.
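As a small illustration of the consequent representation described above, a generalised belief rule can attach belief to subsets of the outcome set rather than only to single outcomes. The sketch below shows only the data structure, with made-up outcome names and belief degrees; it does not reproduce the paper's ER-rule aggregation.

```python
# Outcomes are assumed collectively exhaustive and mutually exclusive.
outcomes = {"low", "medium", "high"}

# Consequent of a generalised belief rule: belief assigned to subsets of the
# power set, so partial, interval and incomplete judgements can be expressed.
consequent = {
    frozenset({"low"}): 0.5,              # belief on a single outcome
    frozenset({"medium", "high"}): 0.3,   # interval-style belief over two outcomes
    frozenset(outcomes): 0.2,             # remaining belief: complete ignorance
}
assert abs(sum(consequent.values()) - 1.0) < 1e-9

# A traditional IF-THEN rule is the special case with all belief on one singleton:
traditional = {frozenset({"high"}): 1.0}
print(consequent, traditional)
```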

12.
In this paper, we present a new symbolic approach to deal with the uncertainty encountered in common-sense reasoning. This approach enables us to represent uncertainty by using linguistic expressions of the interval [Certain, Totally uncertain]. The original uncertainty scale that we use here presents some advantages over other scales in the representation and management of uncertainty. The axioms of our theory are inspired by Shannon's entropy theory and built on the substrate of a symbolic many-valued logic. Thus, uncertainty management in the symbolic logic framework leads to new generalizations of classical inference rules.

13.
Uncertainty in service management stems from the incompleteness and vagueness of the conditioning attributes that characterize a service. In particular, location-based services often have complex interaction mechanisms in terms of their neighborhood relationships. Classical location service models require rigorous parameters and conditioning attributes and offer limited flexibility to incorporate imprecise or ambiguous evidence. In this paper we develop a formal model of uncertainty in service management: a formalism based on rough set theory and Dempster-Shafer evidence theory that objectively represents the uncertainty inherent in service discovery, characterization, and classification. Rough set theory is ideally suited for dealing with limited resolution, vague and incomplete information, while Dempster-Shafer evidence theory provides a consistent approach to modelling an expert's belief and ignorance in the classification decision process. Integrating these two formal approaches in the spatial domain provides a way to model an expert's belief and ignorance in service classification. In an application scenario of the model we use a cognitive map of retail site assessment, which reflects the partially subjective assessment process. The uncertainty hidden in the cognitive map can be consistently formalized using the proposed model. Thus we provide a naturalistic means of incorporating both qualitative, intuitive knowledge and hard empirical information for service management within a formal uncertainty framework.
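The rough-set side of such a formalism rests on lower and upper approximations of a target set under an indiscernibility relation. The short sketch below shows the standard construction on made-up service-site data; the attributes, labels and function name are illustrative assumptions and are not tied to the paper's retail-site application.

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Lower/upper approximation of `target` under indiscernibility on `attributes`.
    `objects` maps object id -> dict of attribute values."""
    blocks = defaultdict(set)
    for oid, vals in objects.items():
        key = tuple(vals[a] for a in attributes)
        blocks[key].add(oid)
    lower = set().union(*([b for b in blocks.values() if b <= target] or [set()]))
    upper = set().union(*([b for b in blocks.values() if b & target] or [set()]))
    return lower, upper

# Illustrative service sites described by two coarse attributes.
sites = {
    "s1": {"footfall": "high", "access": "good"},
    "s2": {"footfall": "high", "access": "good"},
    "s3": {"footfall": "low",  "access": "good"},
}
good_sites = {"s1", "s3"}   # hypothetical expert labelling
print(approximations(sites, ["footfall", "access"], good_sites))
# lower = {'s3'} (certainly good), upper adds {'s1', 's2'} (possibly good)
```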

14.
The optimization of spare parts inventory for equipment systems is becoming a dominant support strategy, especially in the defense industry. Considerable research has been devoted to achieving optimal support performance of the supply system. However, the lack of statistical data limits optimization models based on probability theory. In this paper, personal belief degrees are adopted to compensate for the data deficiency, and uncertainty theory is employed to characterize the uncertainty arising from subjective personal cognition. A base-depot support system supplying repairable spare parts for an equipment system is considered in the presence of uncertainty. Under constraints such as cost and supply availability, a minimal expected backorder model and a minimal backorder rate model are presented based on uncertain measure. A genetic algorithm is adopted to search for the optimal solution. Finally, a numerical example illustrates the feasibility of the optimization models.

15.
When historical data for an uncertain event are not available, belief degree-based uncertainty theory is a useful tool for reflecting such uncertainty. This study focuses on an uncertain bi-objective supply chain network design problem with cost and environmental impacts under uncertainty. As such a network may be designed for the first time in a geographical region, the problem is modelled with the concepts of belief degree-based uncertainty theory. This article is among the first studies on the belief degree-based uncertain supply chain network design problem with environmental impacts. Two approaches, an expected value model and a chance-constrained model, are applied to convert the proposed uncertain problem to its crisp form. The obtained crisp forms are solved by several multi-objective optimization approaches from the literature, such as TH, Niroomand and MMNV. A thorough computational study with several test problems is performed to assess the performance of the crisp models and the solution approaches. According to the results, the obtained crisp formulations are highly sensitive to changes in the values of the cost parameters, while the Niroomand and MMNV solution approaches outperform the other approaches in terms of solution quality.

16.
Management of data imprecision and uncertainty has become increasingly important, especially in situation awareness and assessment applications where reliability of the decision-making process is critical (e.g., in military battlefields). These applications require the following: 1) an effective methodology for modeling data imperfections and 2) procedures for enabling knowledge discovery and quantifying and propagating partial or incomplete knowledge throughout the decision-making process. In this paper, using a Dempster-Shafer belief-theoretic relational database (DS-DB) that can conveniently represent a wider class of data imperfections, an association rule mining (ARM)-based classification algorithm possessing the desired functionality is proposed. For this purpose, various ARM-related notions are revisited so that they can be applied in the presence of data imperfections. A data structure called the belief itemset tree is used to efficiently extract frequent itemsets and generate association rules from the proposed DS-DB. This set of rules is used as the basis on which an unknown data record, whose attributes are represented via belief functions, is classified. These algorithms are validated on a simplified situation assessment scenario where sensor observations may have caused data imperfections in both attribute values and class labels.

17.
In this paper, we present the architecture and describe the functionality of an Intelligent Tutoring System (ITS) which uses an expert system to make decisions during the teaching process. The expert system uses neurules to represent the pedagogical knowledge. Neurules are a type of hybrid rule integrating symbolic rules with neurocomputing. The expert system consists of three components: the user modelling unit, the pedagogical unit and the inference system. The pedagogical knowledge is distributed over a number of neurule bases within the user modelling and pedagogical units. Another component of the ITS that is important for both its development and maintenance is its knowledge management unit, which provides knowledge acquisition and knowledge update capabilities, that is, it offers expert knowledge authoring capabilities to the system.

18.
陈楠楠, 巩晓婷, 傅仰耿. 《智能系统学报》, 2019, 14(6): 1179-1188
Data-driven extended belief rule-base systems build on the traditional belief rule base by generating rules from relational data, which makes constructing the rule base simple and effective. However, the rules activated by this method can be inconsistent and incomplete, and the method cannot handle inputs that activate no rule at all. In view of this, this paper proposes an extended belief rule-base method based on an improved rule activation rate: a Gaussian kernel improves the calculation of the individual matching degree, balancing the consistency and completeness of the activated rules, and a k-nearest-neighbor strategy resolves the zero-activation problem. Finally, a nonlinear function fitting experiment and an oil pipeline leak detection experiment are used to examine the efficiency and accuracy of the proposed method. The experimental results show that the method preserves the inference efficiency of the extended belief rule-base system while improving the precision of the inference results.
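A Gaussian-kernel individual matching degree of the kind mentioned above can be sketched as follows: instead of activating only rules whose antecedent referential values bracket the input, every rule receives a matching degree that decays smoothly with the distance between the input and the rule's referential value, so no input is left with zero activation. The bandwidth, referential values and input below are illustrative assumptions, not the paper's settings.

```python
import math

def gaussian_match(x, referential_value, bandwidth=1.0):
    """Individual matching degree of input x to one antecedent referential value,
    computed with a Gaussian kernel rather than the usual piecewise-linear scheme."""
    return math.exp(-((x - referential_value) ** 2) / (2 * bandwidth ** 2))

# Illustrative rule base: one antecedent attribute with three referential values.
referential_values = [0.0, 0.5, 1.0]
x = 0.9
degrees = [gaussian_match(x, v, bandwidth=0.3) for v in referential_values]
total = sum(degrees)
activation_weights = [d / total for d in degrees]   # normalised activation per rule
print(activation_weights)   # even a remote rule keeps a small, non-zero activation
```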

19.
20.
Building systems that acquire, process and reason with context data is a major challenge. Mobile context-aware systems require model updates and modifications. Additionally, the nature of sensor-based systems implies that the data required for reasoning is not always available, nor is it certain. Finally, the amount of context data can be significant and can grow fast, being constantly processed and interpreted under soft real-time constraints. Such characteristics make this a challenging big data application. In this paper we argue that mobile context-aware systems require specific methods to process big context data while handling the uncertainty and dynamics of this data. We identify and define the main requirements and challenges for developing such systems, and then discuss how these challenges were effectively addressed in the KnowMe project. In our solution, context data is acquired with the AWARE platform, which we extended with techniques that minimise power consumption and conserve storage on a mobile device. The data can then be used to build rule models that express user preferences and habits. Missing or ambiguous data is handled with a number of uncertainty management techniques. Reasoning with rule models is provided by a rule engine developed for mobile platforms. Finally, we demonstrate how our tools can be used to visualise the stored data and simulate the operation of the system in a testing environment.

