Similar Articles
20 similar articles found.
1.
A new data mining method based on fuzzy neural networks is proposed. The method first derives initial rules and a training set using rough set theory, determines the network structure from the number of mining attributes and the classification targets, optimizes the network with a genetic algorithm (GA), adjusts the network weights online with the BP algorithm, and finally simplifies the generated rules to extract fuzzy rules. Simulation results show that the method is effective.
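A minimal sketch of the GA stage mentioned above, assuming a tiny one-hidden-layer network whose flattened weight vector is evolved before BP fine-tuning; the network shape, fitness function, and GA settings are illustrative and do not reproduce the paper's rough-set or rule-extraction steps.

```python
# GA over the flattened weight vector of a small network, before BP fine-tuning.
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X, n_in, n_hid):
    """Tiny one-hidden-layer net; w is a flat parameter vector."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, 1)
    return np.tanh(X @ W1) @ W2

def fitness(w, X, y, n_in, n_hid):
    return -np.mean((forward(w, X, n_in, n_hid).ravel() - y) ** 2)

def ga_optimize(X, y, n_in, n_hid, pop=30, gens=100, sigma=0.1):
    dim = n_in * n_hid + n_hid
    population = rng.normal(0, 1, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(w, X, y, n_in, n_hid) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]            # selection
        pa = parents[rng.integers(0, len(parents), pop - len(parents))]
        pb = parents[rng.integers(0, len(parents), pop - len(parents))]
        mask = rng.random(pa.shape) < 0.5                               # uniform crossover
        children = np.where(mask, pa, pb) + rng.normal(0, sigma, pa.shape)  # mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(w, X, y, n_in, n_hid) for w in population])
    return population[np.argmax(scores)]   # BP would then fine-tune this weight vector online
```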

2.
A neural-network-based method for structure learning and knowledge refinement (cited 2 times in total: 0 self-citations, 2 by others)
Knowledge refinement is an indispensable step in knowledge acquisition. The main limitation of the existing KBANN method for knowledge refinement is that the network topology cannot be changed during training.

3.
Structure and algorithm design of a fuzzy-logic-inference neural network (cited 11 times in total: 0 self-citations, 11 by others)
A neural network based on fuzzy logic inference is constructed. Initial rules obtained from the samples determine the number of neurons in the rule layer and establish the connections between the fuzzification layer and the rule layer. The golden-section method is used to determine the initial centers and widths of the membership functions in the fuzzification layer; the initial weights of the defuzzification layer are determined from the conclusions of the initial rules; and an improved BP algorithm is proposed for this network structure. Simulation examples show that the network structure is reasonable and has good nonlinear mapping ability, that the improved BP algorithm suits this network, and that, compared with another fuzzy neural network, it trains faster and generalizes better.
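The abstract only names the golden-section method for seeding membership functions; the sketch below shows one plausible reading, recursively splitting the input range at golden-section points and deriving widths from neighbour spacing. The splitting depth and the width heuristic are assumptions.

```python
# Seed Gaussian membership functions via golden-section partitioning of a range.
import numpy as np

PHI = (np.sqrt(5) - 1) / 2  # ~0.618

def golden_centers(lo, hi, depth):
    """Recursively split [lo, hi] at the golden-section point."""
    if depth == 0:
        return []
    cut = lo + PHI * (hi - lo)
    return golden_centers(lo, cut, depth - 1) + [cut] + golden_centers(cut, hi, depth - 1)

def init_membership(lo, hi, depth=2):
    centers = np.array(sorted([lo, hi] + golden_centers(lo, hi, depth)))
    widths = np.gradient(centers)          # width ~ spacing to neighbours
    return centers, widths

def gaussian_mf(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

centers, widths = init_membership(0.0, 1.0)
print(np.round(centers, 3), np.round(widths, 3))
```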

4.
A fault-diagnosis method for an aircraft subsystem based on a fuzzy neural network is proposed. An improved fuzzy C-means clustering algorithm performs structure identification, automatically generating the fuzzy rule base and the initial parameters of the fuzzy model; a matching initial fuzzy neural network is then generated and trained by a learning algorithm for parameter identification, yielding an accurate fuzzy model. Using ground test data from the subsystem as sample data, a fuzzy-neural-network fault-diagnosis model of the aircraft subsystem is built. Testing and analysis of the model show that the method is noise-resistant, insensitive to disturbances, and highly accurate in diagnosis.
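A plain textbook fuzzy C-means routine (not the paper's improved variant) illustrating the structure-identification step: each resulting cluster centre would seed one fuzzy rule.

```python
# Standard fuzzy C-means (FCM) clustering; cluster centres seed the fuzzy rules.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
centers, U = fuzzy_c_means(X, c=2)
print(centers)   # two cluster centres -> two initial fuzzy rules
```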

5.
This paper presents a time-series prediction system based on an evolving fuzzy neural network, a fast, adaptive, local learning model. An evolving fuzzy neural network is a special type of neural network that accommodates new data by evolving its structure and parameters. The paper focuses on the network structure, the learning method, and the algorithms for creating, pruning, and aggregating rule nodes. Experimental results show that training parameters such as the number of fuzzy membership functions and the pruning and aggregation of rules strongly affect the network's behavior and prediction results.

6.
To address the problems that a neural network's initial structure depends on the designer's experience and that its adaptability is poor, a dynamic neural-network structure design method based on a semi-supervised learning (SSL) algorithm is proposed. The method trains the network on both labeled and unlabeled samples to obtain a reasonably complete initial structure, then applies global sensitivity analysis (GSA) to the output weights of the hidden-layer neurons to judge how strongly each hidden neuron affects the network output, i.e., its sensitivity value; neurons with very small sensitivity are pruned and neurons with large sensitivity are added as appropriate, realizing dynamic optimization of the network structure, and a proof of convergence during the structural changes is given. Theoretical analysis and Matlab simulations show that the hidden neurons of the SSL-based network change over training time, achieving dynamic structure design. In an application to a hydraulic automatic gauge control (AGC) system, the system output stabilizes at about 160 s with an output error of about 0.03 mm; compared with supervised learning (SL) and unsupervised learning (USL), the output error is reduced by 0.03 mm and 0.02 mm, respectively, showing that the SSL-based dynamic network can effectively improve the accuracy of the system output in practice.
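A hedged sketch of the prune-or-grow step: each hidden neuron gets a sensitivity score and the structure is adapted accordingly. The score used here (output-weight magnitude times activation spread) is a simple stand-in for the paper's global sensitivity analysis, and the thresholds are illustrative.

```python
# Score hidden neurons and prune or grow the hidden layer accordingly.
import numpy as np

def hidden_sensitivity(W1, b1, W2, X):
    H = np.tanh(X @ W1 + b1)                        # hidden activations
    return np.abs(W2).sum(axis=1) * H.std(axis=0)   # one score per hidden neuron

def adapt_structure(W1, b1, W2, X, prune_thr=1e-2, grow_thr=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    s = hidden_sensitivity(W1, b1, W2, X)
    keep = s > prune_thr                            # delete near-useless neurons
    W1, b1, W2 = W1[:, keep], b1[keep], W2[keep]
    if keep.sum() == 0 or s[keep].min() > grow_thr: # all neurons busy -> add one
        W1 = np.hstack([W1, rng.normal(0, 0.1, (W1.shape[0], 1))])
        b1 = np.append(b1, 0.0)
        W2 = np.vstack([W2, rng.normal(0, 0.1, (1, W2.shape[1]))])
    return W1, b1, W2
```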

7.
Knowledge acquisition methods based on artificial neural networks (cited 8 times in total: 1 self-citation, 7 by others)
Knowledge acquisition is the bottleneck in building expert systems. This paper studies neural-network-based knowledge acquisition from three aspects, namely acquiring knowledge by learning from examples, neural-network-based knowledge refinement, and extracting rule knowledge from neural networks, and analyzes the principles and open problems of each.

8.
This paper presents a time-series prediction system based on an evolving fuzzy neural network, a fast, adaptive, local learning model. An evolving fuzzy neural network is a special type of neural network that accommodates new data by evolving its structure and parameters. The paper focuses on the network structure, the learning method, and the algorithms for creating, pruning, and aggregating rule nodes. Experimental results show that training parameters such as the number of fuzzy membership functions and the pruning and aggregation of rules strongly affect the network's behavior and prediction results.

9.
Based on deep learning theory, image denoising is treated as a fitting problem for a neural network; a simple and efficient composite convolutional neural network is constructed, and an image denoising algorithm based on this composite CNN is proposed. The first stage of the algorithm consists of two two-layer convolutional networks that pre-train some of the initial convolution kernels of the three-layer convolutional network in stage two, shortening the training time of the stage-two network and improving the robustness of the algorithm. Finally, the stage-two network effectively denoises new noisy images. Experiments show that the algorithm is comparable to current state-of-the-art image denoising algorithms in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root-mean-square error, works especially well as the noise level increases, and requires less training time.
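A hedged PyTorch sketch of the two-stage idea: small stage-1 networks are trained first and their kernels seed part of the first layer of the deeper stage-2 denoiser. The class names, channel counts, and kernel sizes are illustrative assumptions, not the paper's configuration.

```python
# Seed part of a 3-layer denoiser's first layer with kernels from two 2-layer nets.
import torch
import torch.nn as nn

class Stage1Net(nn.Module):          # 2-layer conv net, trained on noisy->clean pairs
    def __init__(self, feats=16):
        super().__init__()
        self.conv1 = nn.Conv2d(1, feats, 3, padding=1)
        self.conv2 = nn.Conv2d(feats, 1, 3, padding=1)
    def forward(self, x):
        return self.conv2(torch.relu(self.conv1(x)))

class Stage2Denoiser(nn.Module):     # 3-layer conv net used for the final denoising
    def __init__(self, feats=32):
        super().__init__()
        self.conv1 = nn.Conv2d(1, feats, 3, padding=1)
        self.conv2 = nn.Conv2d(feats, feats, 3, padding=1)
        self.conv3 = nn.Conv2d(feats, 1, 3, padding=1)
    def forward(self, x):
        h = torch.relu(self.conv1(x))
        h = torch.relu(self.conv2(h))
        return self.conv3(h)

def seed_from_stage1(stage2, stage1_nets):
    """Copy pre-trained stage-1 kernels into part of stage 2's first layer."""
    with torch.no_grad():
        k = 0
        for net in stage1_nets:
            n = net.conv1.weight.shape[0]
            stage2.conv1.weight[k:k + n].copy_(net.conv1.weight)
            stage2.conv1.bias[k:k + n].copy_(net.conv1.bias)
            k += n
    return stage2

stage2 = seed_from_stage1(Stage2Denoiser(feats=32), [Stage1Net(16), Stage1Net(16)])
```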

10.
张峰, 李守智. 《信息与控制》, 2006, 35(5): 588-592
A new modeling method based on the T-S fuzzy model is proposed. First, a local linear clustering algorithm adaptively determines the number of fuzzy rules and the initial premise and consequent parameters of the T-S model, and the corresponding first-order T-S fuzzy neural network is built. The network parameters are then trained with a hybrid algorithm combining gradient descent and recursive least squares, improving the modeling accuracy. Finally, two simulation examples verify the effectiveness of the method.
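A minimal recursive-least-squares (RLS) routine for the consequent parameters of a first-order T-S model, the second half of the hybrid training described above; the premise parameters would be tuned separately by gradient descent. The single-output formulation and the forgetting factor are simplifications.

```python
# RLS update for the consequent parameters of a first-order T-S fuzzy model.
import numpy as np

class RLS:
    def __init__(self, dim, lam=0.99, delta=1e3):
        self.theta = np.zeros(dim)       # consequent parameters
        self.P = np.eye(dim) * delta     # inverse correlation matrix
        self.lam = lam                   # forgetting factor

    def update(self, phi, y):
        """phi: regressor (firing strengths x [1, inputs]); y: target output."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)           # gain vector
        err = y - phi @ self.theta
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# usage: for each sample, build phi from the normalized rule firing strengths, e.g.
# phi = np.concatenate([w_r * np.append(1.0, x) for w_r in firing_strengths])
```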

11.
This paper introduces a hybrid system termed cascade adaptive resonance theory mapping (ARTMAP) that incorporates symbolic knowledge into neural-network learning and recognition. Cascade ARTMAP, a generalization of fuzzy ARTMAP, represents intermediate attributes and rule cascades of rule-based knowledge explicitly and performs multistep inferencing. A rule insertion algorithm translates if-then symbolic rules into the cascade ARTMAP architecture. Besides the fact that initializing networks with prior knowledge can improve predictive accuracy and learning efficiency, the inserted symbolic knowledge can be refined and enhanced by the cascade ARTMAP learning algorithm. By preserving symbolic rule form during learning, the rules extracted from cascade ARTMAP can be compared directly with the originally inserted rules. Simulations on an animal identification problem indicate that a priori symbolic knowledge always improves system performance, especially with a small training set. A benchmark study on a DNA promoter recognition problem shows that, with the added advantage of fast learning, the cascade ARTMAP rule insertion and refinement algorithms produce performance superior to that of other machine learning systems and of an alternative hybrid system known as knowledge-based artificial neural network (KBANN). Also, the rules extracted from cascade ARTMAP are more accurate and much cleaner than the NofM rules extracted from KBANN.

12.
Traditional connectionist theory-refinement systems map the dependencies of a domain-specific rule base into a neural network, and then refine this network using neural learning techniques. Most of these systems, however, lack the ability to refine their network's topology and are thus unable to add new rules to the (reformulated) rule base. Therefore, with domain theories that lack rules, generalization is poor, and training can corrupt the original rules — even those that were initially correct. The paper presents TopGen, an extension to the KBANN algorithm, which heuristically searches for possible expansions to the KBANN network. TopGen does this by dynamically adding hidden nodes to the neural representation of the domain theory, in a manner that is analogous to the adding of rules and conjuncts to the symbolic rule base. Experiments indicate that the method is able to heuristically find effective places to add nodes to the knowledge bases of four real-world problems, as well as an artificial chess domain. The experiments also verify that new nodes must be added in an intelligent manner. The algorithm showed statistically significant improvements over the KBANN algorithm in all five domains.
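A toy sketch of the general idea behind TopGen, growing the network where errors concentrate by adding a hidden node wired to the weakest output unit; it does not reproduce TopGen's actual heuristic search or its KBANN rule bookkeeping. The network layout and scoring are assumptions.

```python
# Add a hidden node connected to the output unit with the worst error.
import numpy as np

def forward(params, X):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(H @ W2 + b2))), H

def add_hidden_node(params, X, y, rng=None):
    """Grow the hidden layer toward the output unit with the largest error."""
    rng = rng or np.random.default_rng(0)
    W1, b1, W2, b2 = params
    out, _ = forward(params, X)
    worst = np.argmax(np.mean((out - y) ** 2, axis=0))     # weakest output unit
    W1 = np.hstack([W1, rng.normal(0, 0.1, (W1.shape[0], 1))])
    b1 = np.append(b1, 0.0)
    new_row = np.zeros((1, W2.shape[1]))
    new_row[0, worst] = rng.normal(0, 0.1)                  # connect only there
    W2 = np.vstack([W2, new_row])
    return W1, b1, W2, b2
```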

13.
The KBANN (knowledge-based artificial neural networks) approach uses neural networks to refine knowledge that can be written in the form of simple propositional rules. This idea is extended by presenting the MANNIDENT (multivariable artificial neural network identification) algorithm by which the mathematical equations of linear process models determine the topology and initial weights of a network, which is further trained using backpropagation. This method is applied to the task of modelling a non-isothermal CSTR in which a first-order exothermic reaction is occurring. This method produces statistically significant gains in accuracy over both a standard neural network approach and a linear model. Furthermore, using the approximate linear model to initialize the weights of the network produces statistically less variation in accuracy. By structuring the neural network according to the approximate linear model, the model can be readily interpreted.
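A hedged sketch of this style of initialization: a small-gain tanh hidden layer is seeded so that, before training, the network reproduces an approximate linear model y ≈ XC + d (using tanh(z) ≈ z for small z). The gain alpha and the spare hidden units are illustrative choices, not the MANNIDENT recipe.

```python
# Initialize a one-hidden-layer network so it starts as an approximate linear model.
import numpy as np

def init_from_linear(C, d, alpha=0.01, spare=4, rng=None):
    """C: (n_in, n_out) linear gains; d: (n_out,) offsets."""
    rng = rng or np.random.default_rng(0)
    n_in, n_out = C.shape
    W1 = np.hstack([alpha * C, rng.normal(0, 0.01, (n_in, spare))])
    b1 = np.zeros(n_out + spare)
    W2 = np.vstack([np.eye(n_out) / alpha, np.zeros((spare, n_out))])
    b2 = d.copy()
    return W1, b1, W2, b2     # backpropagation then refines these weights

def forward(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

C = np.array([[0.5], [-1.2]]); d = np.array([0.3])
params = init_from_linear(C, d)
X = np.random.randn(5, 2)
print(np.max(np.abs(forward(params, X) - (X @ C + d))))   # small at initialization
```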

14.
The knowledge-based artificial neural network (KBANN) is composed of phases involving the expression of domain knowledge, the abstraction of domain knowledge at neural networks, the training of neural networks, and finally, the extraction of rules from trained neural networks. The KBANN attempts to open up the neural network black box and generates symbolic rules with (approximately) the same predictive power as the neural network itself. An advantage of using KBANN is that the neural network considers the contribution of the inputs towards classification as a group, while rule-based algorithms like C5.0 measure the individual contribution of the inputs one at a time as the tree is grown. The knowledge consolidation model (KCM) combines the rules extracted using KBANN (NeuroRule), a frequency matrix (which is similar to the Naïve Bayesian technique), and the C5.0 algorithm. The KCM can effectively integrate multiple rule sets into one centralized knowledge base. The cumulative rules from other single models can improve overall performance, as they reduce the error term and increase R-squared. The key idea in the KCM is to combine a number of classifiers such that the resulting combined system achieves higher classification accuracy and efficiency than the original single classifiers. The aim of KCM is to design a composite system that outperforms any individual classifier by pooling together the decisions of all classifiers. Another advantage of KCM is that it does not need memory space to store the dataset, as only the extracted knowledge is necessary to build this integrated model. It can also reduce the costs of storage allocation, memory, and time scheduling. In order to verify the feasibility and effectiveness of KCM, a personal credit rating dataset provided by a local bank in Seoul, Republic of Korea is used in this study. The results from the tests show that the performance of KCM is superior to that of the other single models such as multiple discriminant analysis, logistic regression, frequency matrix, neural networks, decision trees, and NeuroRule. Moreover, our model is superior to a previous algorithm for the extraction of rules from general neural networks.
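A minimal sketch of the consolidation step only: pooling the decisions of several already-trained classifiers by weighted vote. The stand-in rule sets below are hypothetical placeholders for the NeuroRule, frequency-matrix, and C5.0 members described above.

```python
# Weighted-vote combination of several classifiers (stand-ins for extracted rule sets).
from collections import Counter

def consolidate(classifiers, weights, x):
    """classifiers: list of callables x -> label; weights: their vote weights."""
    votes = Counter()
    for clf, w in zip(classifiers, weights):
        votes[clf(x)] += w
    return votes.most_common(1)[0][0]

# usage with hypothetical stand-in rule sets
rule_a = lambda x: "good" if x["income"] > 50 else "bad"
rule_b = lambda x: "good" if x["age"] > 30 and x["debt"] < 10 else "bad"
rule_c = lambda x: "bad" if x["debt"] > 20 else "good"
print(consolidate([rule_a, rule_b, rule_c], [0.5, 0.3, 0.2],
                  {"income": 62, "age": 28, "debt": 5}))
```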

15.
Tresp, Volker; Hollatz, Jürgen; Ahmad, Subutai. Machine Learning, 1997, 27(2): 173-200
There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction is focussed on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing a Bayesian approach, by a weighted combination of knowledge bases, or by generating artificial training data representing the prior knowledge. We validate our approach using a standard statistical data set.
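A short sketch of the correspondence the paper builds on: a normalized network of Gaussian basis functions in which each basis unit reads as a rule "if x is near c_i (spread s_i) then y ≈ w_i". The centres, widths, and weights are illustrative.

```python
# Normalized Gaussian-basis-function network; each basis unit acts as one rule.
import numpy as np

def nrbf_predict(X, centers, widths, weights):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    act = np.exp(-d2 / (2 * widths ** 2))          # one Gaussian per rule
    act /= act.sum(axis=1, keepdims=True)          # normalised responsibilities
    return act @ weights

centers = np.array([[0.0, 0.0], [3.0, 3.0]])
widths  = np.array([1.0, 1.0])
weights = np.array([-1.0, 1.0])                    # rule conclusions
X = np.array([[0.2, -0.1], [2.8, 3.1], [1.5, 1.5]])
print(nrbf_predict(X, centers, widths, weights))
```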

16.
Jianbo; Lifeng; Xiaojun. Computers in Industry, 2008, 59(5): 489-501
In many manufacturing processes, some key process parameters (i.e., system inputs) have very strong relationship with the categories (e.g., normal or various faulty products) of finished products (i.e., system outputs). The abnormal changes of these process parameters could result in various categories of faulty products. In this paper, a hybrid learning-based model is developed for on-line intelligent monitoring and diagnosis of the manufacturing processes. In the proposed model, a knowledge-based artificial neural network (KBANN) is developed for monitoring the manufacturing process and recognizing faulty quality categories of the products being produced. In addition, a genetic algorithm (GA)-based rule extraction approach named GARule is developed to discover the causal relationship between manufacturing parameters and product quality. These extracted rules are applied for diagnosis of the manufacturing process, provide guidelines on improving the product quality, and are used to construct KBANN. Therefore, the seamless integration of GARule and KBANN provides abnormal warnings, reveals assignable cause(s), and helps operators optimally set the process parameters. The proposed model is successfully applied to a japing-line, which improves the product quality and saves manufacturing cost.

17.
Shavlik, Jude W. Machine Learning, 1994, 14(3): 321-331
Conclusion. Connectionist machine learning has proven to be a fruitful approach, and it makes sense to investigate systems that combine the strengths of the symbolic and connectionist approaches to AI. Over the past few years, researchers have successfully developed a number of such systems. This article summarizes one view of this endeavor, a framework that encompasses the approaches of several different research groups. This framework (see Figure 1) views the combination of symbolic and neural learning as a three-stage process: (1) the insertion of symbolic information into a neural network, thereby (partially) determining the topology and initial weight settings of a network, (2) the refinement of this network using a numeric optimization method such as backpropagation, possibly under the guidance of symbolic knowledge, and (3) the extraction of symbolic rules that accurately represent the knowledge contained in a trained network. These three components form an appealing, complete picture (approximately-correct symbolic information in, more-accurate symbolic information out), although the three stages can also be studied independently. In conclusion, the research summarized in this paper demonstrates that combining symbolic and connectionist methods is a promising approach to machine learning.
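A hedged sketch of stage (1), rule insertion in the KBANN style: each conjunctive rule becomes a sigmoid unit whose weights and bias make it fire only when the rule's antecedents hold. The constant omega and the bias formula follow the commonly described KBANN mapping but should be read as an illustration, not a faithful reimplementation.

```python
# Translate a conjunctive if-then rule into the weights and bias of a sigmoid unit.
import numpy as np

OMEGA = 4.0

def rule_to_unit(features, positives, negatives):
    """Rule: conclusion :- positives, not negatives. Returns (weights, bias)."""
    w = np.zeros(len(features))
    for f in positives:
        w[features.index(f)] = OMEGA
    for f in negatives:
        w[features.index(f)] = -OMEGA
    bias = -(len(positives) - 0.5) * OMEGA   # unit fires only if the conjunction holds
    return w, bias

def unit_output(w, bias, x):
    return 1.0 / (1.0 + np.exp(-(w @ x + bias)))

features = ["has_wings", "lays_eggs", "has_fur"]
w, b = rule_to_unit(features, positives=["has_wings", "lays_eggs"], negatives=["has_fur"])
print(unit_output(w, b, np.array([1, 1, 0])))   # high: rule satisfied
print(unit_output(w, b, np.array([1, 0, 0])))   # low: missing antecedent
```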

18.
Maclin, Richard; Shavlik, Jude W. Machine Learning, 1993, 11(2-3): 195-215
This article describes a connectionist method for refining algorithms represented as generalized finite-state automata. The method translates the rule-like knowledge in an automaton into a corresponding artificial neural network, and then refines the reformulated automaton by applying backpropagation to a set of examples. This technique for translating an automaton into a network extends the KBANN algorithm, a system that translates a set of propositional rules into a corresponding neural network. The extended system, FSKBANN, allows one to refine the large class of algorithms that can be represented as state-based processes. As a test, FSKBANN is used to improve the Chou–Fasman algorithm, a method for predicting how globular proteins fold. Empirical evidence shows that the multistrategy approach of FSKBANN leads to a statistically significantly more accurate solution than both the original Chou–Fasman algorithm and a neural network trained using the standard approach. Extensive statistics report the types of errors made by the Chou–Fasman algorithm, the standard neural network, and the FSKBANN network.

19.
Extracting Refined Rules from Knowledge-Based Neural Networks (cited 17 times in total: 4 self-citations, 13 by others)
Neural networks, despite their empirically proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge must be inserted into a neural network. Second, the network must be refined. Third, the refined knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this article, we propose and empirically evaluate a method for the final, and possibly most difficult, step. Our method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules 1) closely reproduce the accuracy of the network from which they are extracted; 2) are superior to the rules produced by methods that directly refine symbolic rules; 3) are superior to those produced by previous techniques for extracting rules from trained neural networks; and 4) are human comprehensible. Thus, this method demonstrates that neural networks can be used to effectively refine symbolic knowledge. Moreover, the rule-extraction technique developed herein contributes to the understanding of how symbolic and connectionist approaches to artificial intelligence can be profitably integrated.
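A simplified sketch of the extraction step: read an M-of-N style rule off a trained sigmoid unit by pruning small weights and asking how many of the remaining antecedents must hold to push the unit past its threshold. This is a toy reduction, not the NofM algorithm the paper compares against; the weights, bias, and pruning cutoff are hypothetical.

```python
# Read a rough M-of-N rule off one trained sigmoid unit.
import numpy as np

def extract_m_of_n(weights, bias, features, prune=0.5):
    keep = [(f, w) for f, w in zip(features, weights) if abs(w) > prune]
    keep.sort(key=lambda fw: -abs(fw[1]))
    antecedents = [f if w > 0 else f"not {f}" for f, w in keep]
    # how many of the strongest antecedents are needed to cross the threshold?
    total, m = 0.0, 0
    for _, w in keep:
        total += abs(w)
        m += 1
        if total > -bias:
            break
    return f"IF at least {m} of {antecedents} THEN conclusion"

weights = np.array([3.9, 3.7, -4.1, 0.2])
features = ["has_wings", "lays_eggs", "has_fur", "colour_is_red"]
print(extract_m_of_n(weights, bias=-6.0, features=features))
```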

20.
Communication networks form the backbone of our society. Topology control algorithms optimize the topology of such communication networks. Due to the importance of communication networks, a topology control algorithm should guarantee certain required consistency properties (e.g., connectivity of the topology), while achieving desired optimization properties (e.g., a bounded number of neighbors). Real-world topologies are dynamic (e.g., because nodes join, leave, or move within the network), which requires topology control algorithms to operate in an incremental way, i.e., based on the recently introduced modifications of a topology. Visual programming and specification languages are a proven means for specifying the structure as well as consistency and optimization properties of topologies. In this paper, we present a novel methodology, based on a visual graph transformation and graph constraint language, for developing incremental topology control algorithms that are guaranteed to fulfill a set of specified consistency and optimization constraints. More specifically, we model the possible modifications of a topology control algorithm and the environment using graph transformation rules, and we describe consistency and optimization properties using graph constraints. On this basis, we apply and extend a well-known constructive approach to derive refined graph transformation rules that preserve these graph constraints. We apply our methodology to re-engineer an established topology control algorithm, kTC, and evaluate it in a network simulation study to show the practical applicability of our approach.
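A hedged sketch of the kind of rule such topology control algorithms apply, using a simplified reading of kTC: in every triangle, the longest edge is marked inactive if it is at least k times as heavy as the shortest edge. The incremental, change-driven machinery and the graph-constraint checking that the paper is actually about are omitted, and the parameter k is illustrative.

```python
# Simplified kTC-style rule: deactivate the longest edge of "stretched" triangles.
from itertools import combinations

def ktc(nodes, weight, k=1.2):
    """weight: dict mapping frozenset({u, v}) -> link weight; returns inactive edges."""
    inactive = set()
    for u, v, w in combinations(nodes, 3):
        tri = [frozenset({u, v}), frozenset({v, w}), frozenset({u, w})]
        if not all(e in weight for e in tri):
            continue                          # not a triangle
        tri.sort(key=lambda e: weight[e])
        if weight[tri[-1]] >= k * weight[tri[0]]:
            inactive.add(tri[-1])             # drop the longest edge
    return inactive

nodes = ["a", "b", "c"]
weight = {frozenset({"a", "b"}): 1.0,
          frozenset({"b", "c"}): 1.1,
          frozenset({"a", "c"}): 2.0}
print(ktc(nodes, weight))   # the a-c edge is deactivated
```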

