Similar Documents
20 similar documents were retrieved (search time: 31 ms).
1.
Setiono, R.; Liu, Huan. Computer, 1996, 29(3): 71-77
Neural networks often surpass decision trees in predicting pattern classifications, but their predictions cannot be explained. Our approach to understanding a neural network uses symbolic rules to represent its decision process, making each prediction explicit and understandable. The algorithm, NeuroRule, extracts these rules from a neural network; the network can then be interpreted through the rules which, in general, preserve network accuracy and explain the prediction process. We based NeuroRule on a standard three-layer feedforward network. NeuroRule consists of four phases. First, it builds a weight-decay backpropagation network so that weights reflect the importance of the network's connections. Second, it prunes the network to remove irrelevant connections and units while maintaining the network's predictive accuracy. Third, it discretizes the hidden-unit activation values by clustering. Finally, it extracts rules from the network with discretized hidden-unit activation values.
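Below is a minimal Python sketch of the first and third phases as they are described here: a weight-decay term added to the training loss, and a simple greedy clustering that discretizes hidden-unit activations into a few representative levels. The function names and the particular clustering rule are illustrative assumptions, not NeuroRule's actual procedure.

```python
import numpy as np

def weight_decay_loss(errors, weights, decay=1e-4):
    """Sum-of-squares error plus a decay term that shrinks unimportant weights."""
    return 0.5 * np.sum(errors ** 2) + decay * sum(np.sum(w ** 2) for w in weights)

def discretize_activations(values, tol=0.1):
    """Greedily cluster sorted activation values; each cluster is replaced by its
    mean, turning continuous activations into a few discrete levels."""
    centers = []                      # list of (running sum, count) per cluster
    for v in np.sort(values):
        if centers and abs(v - centers[-1][0] / centers[-1][1]) <= tol:
            s, n = centers[-1]
            centers[-1] = (s + v, n + 1)
        else:
            centers.append((v, 1))
    return [s / n for s, n in centers]

print(discretize_activations(np.array([0.02, 0.05, 0.48, 0.52, 0.97])))
# -> roughly three discrete levels near 0.035, 0.5 and 0.97
```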

2.
Effective data mining using neural networks
Classification is one of the data mining problems that has recently received great attention in the database community. The paper presents an approach to discovering symbolic classification rules using neural networks. Neural networks have generally been thought unsuited to data mining because they do not state explicitly, as symbolic rules suitable for verification or interpretation by humans, how their classifications are made. With the proposed approach, concise symbolic rules with high accuracy can be extracted from a neural network. The network is first trained to achieve the required accuracy rate. Redundant connections of the network are then removed by a network pruning algorithm. The activation values of the hidden units in the network are analyzed, and classification rules are generated from the result of this analysis. The effectiveness of the proposed approach is clearly demonstrated by experimental results on a set of standard data mining test problems.
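As a rough illustration of the pruning step, the sketch below assumes magnitude-based pruning checked against a user-supplied accuracy function; the thresholds and names are hypothetical and not taken from the paper.

```python
import numpy as np

def prune_connections(weights, threshold=0.1):
    """Zero out connections whose magnitude falls below the threshold."""
    return [w * (np.abs(w) >= threshold) for w in weights]

def prune_while_accurate(weights, evaluate, required_accuracy,
                         thresholds=(0.05, 0.1, 0.2, 0.4)):
    """Try increasingly aggressive thresholds and keep the most aggressive
    pruning that still meets the required accuracy. `evaluate` is assumed to
    return validation accuracy for a given list of weight matrices."""
    best = weights
    for t in thresholds:
        candidate = prune_connections(weights, t)
        if evaluate(candidate) >= required_accuracy:
            best = candidate
        else:
            break
    return best

# toy usage: the "accuracy" below is a stand-in that tolerates mild pruning
w = [np.array([[0.8, 0.03], [-0.6, 0.07]])]
acc = lambda ws: 0.95 if np.count_nonzero(ws[0]) >= 2 else 0.5
print(prune_while_accurate(w, acc, required_accuracy=0.9)[0])
```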

3.
Extracting M-of-N rules from trained neural networks
An effective algorithm for extracting M-of-N rules from trained feedforward neural networks is proposed. First, we train a network in which each input of the data can take only one of two possible values, -1 or 1. Next, we apply the hyperbolic tangent function to each connection from the input layer to the hidden layer of the network. By applying this squashing function, the activation values at the hidden units are effectively computed as the hyperbolic tangent (or the sigmoid) of the weighted inputs, where the weights have magnitudes equal to one. By restricting the inputs and the weights to the binary values -1 or 1, the extraction of M-of-N rules from the networks becomes trivial. We demonstrate the effectiveness of the proposed algorithm on several widely tested datasets. For datasets consisting of thousands of patterns with many attributes, the rules extracted by the algorithm are simple and accurate.
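The sketch below illustrates why rule extraction becomes trivial under these restrictions: with inputs and weights in {-1, +1}, the weighted sum into a hidden unit counts agreements minus disagreements, so the unit fires exactly when at least M of the N inputs agree with the signs of their weights. The bias convention and activation cutoff are assumptions for illustration.

```python
import numpy as np

def m_of_n_threshold(weights, bias, activation_cutoff=0.0):
    """Return M such that the unit fires iff at least M inputs match the weight signs.
    weighted_sum = 2*agreements - N + bias, so firing (sum > cutoff) requires
    agreements > (N - bias + cutoff) / 2."""
    n = len(weights)
    return int(np.floor((n - bias + activation_cutoff) / 2)) + 1

def unit_fires(x, weights, bias):
    """x and weights are vectors over {-1, +1}."""
    return np.tanh(np.dot(weights, x) + bias) > 0.0

w = np.array([1.0, -1.0, 1.0])                   # N = 3 antecedents
b = -1.0
print(m_of_n_threshold(w, b))                    # -> 3: a 3-of-3 (i.e. AND) rule
print(unit_fires(np.array([1, -1, 1]), w, b))    # all three agree -> True
print(unit_fires(np.array([1, 1, 1]), w, b))     # only two agree -> False
```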

4.
Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state, triple-parity grammar. Further simulations indicate that this pruning method can have generalization performance superior to that obtained by training with weight decay.

5.
Statistical parameters computed from the gray-level run-length matrix of baked-bread slice region images are used as texture features, and a neural network classifies bread quality. Because a neural network operates as a black box, the knowledge distributed inside it is difficult to interpret. A genetic algorithm based on clustering the output values of the hidden-layer neurons is used to extract rules for bread quality classification; experimental results show that the method achieves good recognition performance.
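For reference, a small Python sketch (not the paper's code) of a horizontal gray-level run-length matrix, from which statistics such as short-run emphasis can be computed as texture features:

```python
import numpy as np

def run_length_matrix(image, levels):
    """image: 2D array of integer gray levels in [0, levels); horizontal runs.
    Rows of the result index gray levels, columns index run lengths."""
    max_run = image.shape[1]
    rlm = np.zeros((levels, max_run), dtype=int)
    for row in image:
        run_value, run_len = row[0], 1
        for v in row[1:]:
            if v == run_value:
                run_len += 1
            else:
                rlm[run_value, run_len - 1] += 1
                run_value, run_len = v, 1
        rlm[run_value, run_len - 1] += 1           # close the last run in the row
    return rlm

img = np.array([[0, 0, 1, 1, 1],
                [2, 2, 2, 2, 0]])
print(run_length_matrix(img, levels=3))
```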

6.
A new method for estimating the number of hidden-layer neurons in feedforward neural networks
The number of hidden-layer neurons in a feedforward neural network is usually chosen by experience, which often leads to too few or too many hidden units and hence to insufficient network capacity or to overfitting. This study proposes an information-entropy-based method for estimating the number of hidden nodes in a three-layer feedforward network. The method first trains an initial network with a sufficient number of hidden units on the training set. It then computes, for the training samples that the trained network classifies correctly, the activation values of the hidden-layer neurons, sorts them, and computes the information gain of candidate partitions of these activation values, thereby constructing a decision tree that correctly partitions the whole sample space. Finally, the whole tree is traversed to find the important hidden neurons, and the redundant, irrelevant hidden units are deleted, giving an estimate of a good number of hidden-layer neurons. The paper closes by using the method to construct a network with a suitable number of hidden units for tea quality assessment; the results show that the method effectively estimates the number of hidden units in a feedforward network.
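A minimal sketch, under assumed definitions, of the information-gain computation at the core of the method: candidate split points over a hidden unit's sorted activation values are scored by how well they separate the class labels.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(activations, labels):
    """Return the threshold on this unit's activations with maximal information gain."""
    order = np.argsort(activations)
    a, y = np.asarray(activations)[order], np.asarray(labels)[order]
    base = entropy(y)
    best_gain, best_t = -1.0, None
    for i in range(1, len(a)):
        if a[i] == a[i - 1]:
            continue
        t = (a[i] + a[i - 1]) / 2
        left, right = y[:i], y[i:]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

acts = [0.05, 0.1, 0.2, 0.8, 0.9, 0.95]
labels = ['A', 'A', 'A', 'B', 'B', 'B']
print(best_split(acts, labels))   # threshold near 0.5 with gain 1.0
```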

7.
This paper proposes FTART2, an adaptive resonance neural network algorithm based on field theory. The algorithm organically combines the advantages of adaptive resonance theory and field theory, requires no manual setting of hidden-layer neurons, learns quickly, and achieves high accuracy. In addition, a method for extracting symbolic rules from an FTART2 network is proposed. Experimental results show that the symbolic rules extracted with this method are highly comprehensible and predict accurately, describing the behavior of the FTART2 network well.

8.
A method for extracting rules from neural networks and its application
A method consisting of two stages, preprocessing and rule extraction, is proposed for extracting rules from neural networks. The preprocessing stage comprises dynamic modification, clustering, and pruning. Dynamic modification automatically generates, or constructs from an initial rule set, a preliminary fully connected or partially connected network topology; clustering and pruning then remove unimportant or redundant hidden nodes and connections, respectively, yielding the most compact, smallest topology as the basis for rule extraction. A rule extraction algorithm is proposed and applied to the pruned network. The method was applied to meteorological cloud-image data from US AD reports, a rule set was extracted, and it was then tested.
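One plausible way the clustering step could flag redundant hidden nodes is sketched below; the abstract does not specify the criterion, so the correlation threshold and procedure are assumptions. Hidden units whose activation patterns over the training set are nearly copies, or mirrors, of an earlier unit are marked for deletion.

```python
import numpy as np

def redundant_hidden_units(activations, corr_threshold=0.98):
    """activations: (n_samples, n_hidden) matrix of hidden-unit outputs.
    Returns indices of hidden units judged redundant because their activation
    pattern is (almost) a copy or mirror of an earlier unit's pattern."""
    corr = np.corrcoef(activations.T)
    n = corr.shape[0]
    to_delete = set()
    for i in range(n):
        if i in to_delete:
            continue
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= corr_threshold:
                to_delete.add(j)
    return sorted(to_delete)

acts = np.array([[0.1, 0.1, 0.9],
                 [0.2, 0.2, 0.1],
                 [0.9, 0.9, 0.8],
                 [0.8, 0.8, 0.2]])
print(redundant_hidden_units(acts))   # -> [1]: unit 1 duplicates unit 0
```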

9.
A structural implementation of a fuzzy inference system through a connectionist network, based on an MLP with logical neurons connected through binary and numerical weights, is considered. The resulting fuzzy neural network is trained using classical backpropagation to learn the inference rules of a fuzzy system by adjusting the numerical weights. For controller design, training is carried out off line in a closed-loop simulation. Rules for the fuzzy logic controller are extracted from the network by interpreting the consequence weights as measures of confidence of the underlying rules. The framework is used in a simulation study for estimation and control of a pulp batch digester. The controlled variable, the Kappa number, a measure of the lignin content in the pulp that is not directly measurable, is estimated from temperature and liquor concentration using the fuzzy neural network. A fuzzy neural network is also trained to control the Kappa number, and rules are extracted from the trained network to construct a fuzzy logic controller.
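A hedged sketch of the rule-reading step described above, with assumed names and rule format: each hidden rule node's consequence weight to the output is read off as the confidence of the corresponding IF-THEN rule, and weak rules are dropped.

```python
import numpy as np

def extract_fuzzy_rules(antecedent_labels, consequence_weights, min_confidence=0.5):
    """antecedent_labels: human-readable antecedents, one per rule node.
    consequence_weights: trained weights from the rule nodes to the output."""
    rules = []
    for label, w in zip(antecedent_labels, consequence_weights):
        confidence = float(abs(w))
        if confidence >= min_confidence:
            action = "increase" if w > 0 else "decrease"
            rules.append(f"IF {label} THEN {action} output (confidence {confidence:.2f})")
    return rules

labels = ["temperature is HIGH and liquor is LOW", "temperature is LOW"]
print(extract_fuzzy_rules(labels, np.array([0.9, -0.2])))
# -> only the first rule survives the confidence threshold
```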

10.
The automatic acquisition of fuzzy rules has long been a bottleneck for fuzzy systems. A new learning algorithm for fuzzy weighted neural networks is proposed that combines a hierarchically structured, hybrid-coded genetic algorithm with evolutionary programming; the algorithm optimizes the structure and the parameters of the fuzzy weighted neural network simultaneously. A method for extracting fuzzy rules from the network is then described, so that optimal fuzzy rules are obtained automatically. Analysis and experimental results show that the proposed method outperforms other approaches in rule extraction and classification accuracy.

11.
Research on field-theory-based adaptive resonance neural networks
周志华  陈兆乾  陈世福 《软件学报》2000,11(11):1451-1459
FTART, an adaptive resonance neural network algorithm based on field theory, is proposed. It organically combines the strengths of adaptive resonance theory and field theory and resolves, in a distinctive way, conflicts between training examples and the dynamic expansion of classification regions; it not only requires no manual setting of hidden-layer neurons but also achieves fast training and high predictive accuracy. A method is also proposed for extracting highly comprehensible, accurate symbolic rules from a trained FTART network, namely a statistics-based generate-and-test procedure. Experimental results show that the symbolic rules extracted with this method describe the behavior of FTART well.

12.
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained so as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors apply their results to the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation, and therefore cannot be applied to this case.
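As a concrete, simplified illustration of the reduced-rank result (assuming, unlike the paper, an invertible input autocorrelation for convenience), the sketch below computes the rank-k least-squares map from inputs to targets via an SVD of the fitted values; a two-layer linear network with k hidden units trained by least squares converges to a factorization of this kind.

```python
import numpy as np

def reduced_rank_regression(X, T, k):
    """X: (n, d) inputs, T: (n, m) targets. Returns W1 (d, k) and W2 (k, m)
    such that X @ W1 @ W2 is the best rank-k least-squares fit to T."""
    W_full, *_ = np.linalg.lstsq(X, T, rcond=None)        # full-rank solution
    U, s, Vt = np.linalg.svd(X @ W_full, full_matrices=False)
    W2 = Vt[:k]                                           # top-k right singular directions
    W1 = W_full @ Vt[:k].T                                # project the full solution onto them
    return W1, W2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
T = X @ rng.normal(size=(5, 3)) + 0.01 * rng.normal(size=(200, 3))
W1, W2 = reduced_rank_regression(X, T, k=2)
print(np.linalg.norm(T - X @ W1 @ W2))   # residual of the rank-2 fit
```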

13.
In this paper, a novel classification rule extraction algorithm recently proposed by the authors is employed to determine the causes of quality defects in a fabric production facility in terms of predetermined parameters such as machine type and warp type. The rule extraction algorithm works on trained artificial neural networks in order to discover the hidden information that is available in the form of their connection weights. The algorithm is mainly based on a swarm intelligence metaheuristic known as Touring Ant Colony Optimization (TACO). It has a hierarchical structure with two levels. In the first level, a multilayer perceptron type neural network is trained and its weights are extracted. After obtaining the weights, in the second level, the TACO-based algorithm is applied to extract classification rules. The main purpose of the present work is to determine and analyze the parameters that most affect quality defects in fabric production; the proposed algorithm is used to discover and evaluate the parameters and parameter levels that give the best quality results. The accuracy of the proposed algorithm is also compared with that of several other rule-based algorithms in order to demonstrate its competitiveness.

14.
Extracting Refined Rules from Knowledge-Based Neural Networks
Neural networks, despite their empirically proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge must be inserted into a neural network. Second, the network must be refined. Third, the refined knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this article, we propose and empirically evaluate a method for the final, and possibly most difficult, step. Our method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules 1) closely reproduce the accuracy of the network from which they are extracted; 2) are superior to the rules produced by methods that directly refine symbolic rules; 3) are superior to those produced by previous techniques for extracting rules from trained neural networks; and 4) are human comprehensible. Thus, this method demonstrates that neural networks can be used to effectively refine symbolic knowledge. Moreover, the rule-extraction technique developed herein contributes to the understanding of how symbolic and connectionist approaches to artificial intelligence can be profitably integrated.

15.
Due to their ability to handle nonlinear problems, artificial neural networks are applied in several areas of science. However, humans are unable to assimilate the knowledge kept in those networks, since such knowledge is implicitly represented by their connections and the respective numerical weights. Recently, formal concept analysis, through the FCANN method, has been demonstrated to be a powerful methodology for extracting knowledge from neural networks. However, depending on the settings used or the number of neural network variables, the number of formal concepts, and consequently of rules extracted from the network, can make the process of knowledge extraction and learning impossible. This paper therefore addresses the application of the JBOS approach to extract reduced knowledge from the formal contexts that FCANN extracts from the neural network, providing the end user with a small number of formal concepts and rules without losing the ability to understand the process learned by the network.

16.
Symbolic interpretation of artificial neural networks
Hybrid intelligent systems that combine knowledge-based and artificial neural network systems typically have four phases, involving domain knowledge representation, mapping of this knowledge into an initial connectionist architecture, network training, and rule extraction, respectively. The final phase is important because it can provide a trained connectionist architecture with explanation power and validate its output decisions. Moreover, it can be used to refine and maintain the initial knowledge acquired from domain experts. In this paper, we present three rule extraction techniques. The first technique extracts a set of binary rules from any type of neural network. The other two techniques are specific to feedforward networks with a single hidden layer of sigmoidal units. The second technique extracts partial rules that represent the most important embedded knowledge with an adjustable level of detail, while the third technique provides a more comprehensive and universal approach. A rule-evaluation technique, which orders extracted rules based on three performance measures, is then proposed. The three techniques are applied to the iris and breast cancer data sets. The extracted rules are evaluated qualitatively and quantitatively, and are compared with those obtained by other approaches.

17.
To bridge the semantic gap between low-level image visual features and emotional semantics, image texture is used as the low-level feature and a BP neural network is used to map the low-level image features to emotional semantics. The trained network model is then pruned while keeping its accuracy unchanged, and finally a neural network rule extraction algorithm converts the knowledge implicit in the network model into easily understood IF-THEN rules. Experiments verify the effectiveness of the method and the comprehensibility of the extracted rules.

18.
A knowledge refinement method based on neural network structure learning
Knowledge refinement is an indispensable step in knowledge acquisition. The main limitation of the existing KBANN (knowledge-based artificial neural network) method for knowledge refinement is that the network topology cannot be changed during training. This paper proposes a knowledge refinement method based on neural network structure learning: a rule set is first converted into an initial neural network, the initial network is then trained with training samples and a structure learning algorithm, and refined rule knowledge is extracted from the trained network. Changes to the network topology are achieved during training by a structure learning algorithm based on dynamically adding hidden nodes and pruning the network. Extensive examples show that the method is effective.
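For context, a minimal sketch of the standard KBANN-style rule-to-network mapping that such refinement methods start from; the weight value and bias convention below are one common choice, not necessarily those of this paper.

```python
import numpy as np

def rule_to_unit(antecedents, feature_index, w=4.0):
    """antecedents: e.g. [('humid', True), ('windy', False)] meaning
    IF humid AND NOT windy. Returns (weights, bias) for one sigmoid unit:
    positive antecedents get weight +w, negated ones -w, and the bias is set
    so the unit fires only when the whole rule body is satisfied."""
    weights = np.zeros(len(feature_index))
    n_positive = 0
    for name, positive in antecedents:
        weights[feature_index[name]] = w if positive else -w
        n_positive += int(positive)
    bias = -w * (n_positive - 0.5)     # net input > 0 only if all antecedents hold
    return weights, bias

features = {'humid': 0, 'windy': 1, 'cold': 2}
weights, bias = rule_to_unit([('humid', True), ('windy', False)], features)
x = np.array([1.0, 0.0, 1.0])          # humid, not windy, cold
print(1 / (1 + np.exp(-(weights @ x + bias))))   # close to 1: the rule fires
```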

19.
This study addresses the design and training of a Multi-Layer Perceptron classifier for identification of wood veneer defects from statistical features of wood sub-images. Previous research utilised a neural network structure manually optimised using the Taguchi method, with the connection weights trained using the Backpropagation rule. The proposed approach uses the evolutionary Artificial Neural Network Generation and Training (ANNGaT) algorithm to generate the neural network system. The algorithm evolves the neural network topology and the weights simultaneously. ANNGaT optimises the size of the hidden layer(s) of the neural network structure through genetic mutations of the individuals; the number of hidden layers is a system parameter. Experimental tests show that ANNGaT produces highly compact neural network structures capable of accurate and robust learning. The tests show no differences in accuracy between neural network architectures using one and two hidden layers of processing units. Compared to the manual approach, the evolutionary algorithm generates equally performing solutions using considerably smaller architectures. Moreover, the proposed algorithm requires a lower design effort since the process is fully automated.
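An illustrative sketch (not the ANNGaT implementation) of the kind of topology mutation described: an individual's hidden layer grows or shrinks by one unit, with new weights initialized to small random values.

```python
import numpy as np

def mutate_hidden_size(W_in, W_out, rng, grow_prob=0.5, init_scale=0.1):
    """W_in: (n_inputs, n_hidden), W_out: (n_hidden, n_outputs)."""
    n_hidden = W_in.shape[1]
    if rng.random() < grow_prob:
        # grow: append one hidden unit with small random weights
        W_in = np.hstack([W_in, init_scale * rng.normal(size=(W_in.shape[0], 1))])
        W_out = np.vstack([W_out, init_scale * rng.normal(size=(1, W_out.shape[1]))])
    elif n_hidden > 1:
        # shrink: delete a randomly chosen hidden unit
        j = rng.integers(n_hidden)
        W_in = np.delete(W_in, j, axis=1)
        W_out = np.delete(W_out, j, axis=0)
    return W_in, W_out

rng = np.random.default_rng(1)
W_in, W_out = rng.normal(size=(4, 6)), rng.normal(size=(6, 2))
W_in, W_out = mutate_hidden_size(W_in, W_out, rng)
print(W_in.shape, W_out.shape)   # hidden layer grew or shrank by one unit
```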

20.
Neural networks do not readily provide an explanation of the knowledge stored in their weights as part of their information processing. Until recently, neural networks were considered to be black boxes, with the knowledge stored in their weights not readily accessible. Since then, research has resulted in a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses the extraction of knowledge in symbolic form from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that networks' states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics that either limit the number of clusters that may form during training or limit the exploration of the space of hidden recurrent state neurons. These limitations, while necessary, may lead to decreased fidelity, in which the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed here uses a polynomial time, symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input-output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge.
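A small sketch of the older, cluster-based extraction idea that the proposed method replaces: hidden-state vectors visited while reading strings are quantized into a finite set of clusters, and the transitions between clusters on each input symbol are recorded as a candidate DFA transition table. The quantization scheme and the toy network below are purely illustrative.

```python
import numpy as np

def extract_dfa(run_rnn, strings, n_bins=2):
    """run_rnn(s) is assumed to return the sequence of hidden-state vectors
    visited while reading string s (the start state plus one vector per symbol)."""
    def quantize(h):
        # crude clustering: bin each hidden unit's value into n_bins intervals
        return tuple(np.minimum((np.asarray(h) * n_bins).astype(int), n_bins - 1))

    transitions = {}
    for s in strings:
        states = [quantize(h) for h in run_rnn(s)]
        for prev, sym, nxt in zip(states, s, states[1:]):
            transitions.setdefault((prev, sym), set()).add(nxt)
    # extraction is consistent only if every (state, symbol) pair maps to one cluster
    deterministic = all(len(v) == 1 for v in transitions.values())
    return transitions, deterministic

# toy stand-in: a "network" whose single hidden unit tracks the parity of 'a's
def toy_run_rnn(s):
    h, states = 0.1, [[0.1]]
    for ch in s:
        h = 0.9 - h if ch == 'a' else h
        states.append([h])
    return states

table, ok = extract_dfa(toy_run_rnn, ["aa", "ab", "ba"])
print(ok, table)
```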
