Similar Documents
20 similar documents found
1.
杨娟  陆阳  俞磊  方欢 《自动化学报》2012,38(9):1459-1470
In Boolean space, the Hamming ball bulge (汉明球突) describes a class of Boolean functions with a clear structure; owing to its particular geometric properties, it occurs in two forms, linearly separable and linearly non-separable. Analyzing the logical meaning of Hamming ball bulges is important for rule extraction from binary neural networks. However, extracting rules with a clear logical meaning from linearly separable Hamming ball bulges, as well as deciding whether a Hamming ball bulge is linearly non-separable and obtaining its logical meaning, remain open problems in binary neural network research. This paper first exploits the geometric properties of Hamming ball bulges on the Hamming graph and, by sorting true nodes according to weighted height, proposes an algorithm for deciding whether an arbitrary Boolean function is a Hamming ball bulge. On this basis, using the known logical meanings of existing structures, a Hamming ball bulge is decomposed into a union of structures with known logical meaning, which yields its own logical meaning. Finally, an example illustrates the decision procedure and the resulting logical expression.

2.
The Logical Meaning of Hamming Balls in Binary Neural Networks and a General Decision Method   Cited by: 3 (self: 0, others: 3)
Analyzing the logical meaning of binary neurons is very important for rule extraction from binary neural networks. In Boolean space, the Hamming ball is a linearly separable structure, and extracting rules with a clear logical meaning from Hamming balls is an open problem in binary neural network research. By extending the MofN rule representation, this paper analyzes the logical meaning of Hamming balls and proposes the LEM and GEM rule methods for expressing it; the equivalence between a Hamming ball and its complementary Hamming ball is also discussed. Another important result is a proof of the necessary and sufficient condition for a binary neuron to be equivalent to a Hamming ball, which establishes a general method for identifying Hamming balls.
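The neuron/Hamming-ball equivalence stated in the abstract above can be illustrated with a small sketch (Python, hypothetical helper names; this is a standard threshold-unit construction, not the paper's LEM/GEM method): a ball of radius r around center c is realized by a threshold unit with weight +1 where c_i = 1, weight -1 where c_i = 0, and threshold |c| - r.

```python
from itertools import product

def hamming_ball_neuron(c, r):
    """Weights and threshold of a binary threshold unit that fires
    exactly on the Hamming ball of radius r centered at c."""
    w = [1 if ci else -1 for ci in c]
    theta = sum(c) - r  # number of ones in c, minus the radius
    return w, theta

def fires(w, theta, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

# Brute-force check on all 4-bit inputs: the neuron's output matches
# membership in the ball B(c, r).
c, r = (1, 0, 1, 1), 1
w, theta = hamming_ball_neuron(c, r)
assert all(fires(w, theta, x) == int(hamming(x, c) <= r)
           for x in product((0, 1), repeat=4))
```

The check works because the weighted sum equals |c| minus the Hamming distance to c, so the threshold condition is exactly distance ≤ r.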

3.
A Hamming-Graph Learning Algorithm for Binary Neural Networks   Cited by: 1 (self: 0, others: 1)
华强  郑启伦 《计算机学报》2001,24(11):1150-1155
The geometric learning algorithm ETL for binary neural networks must separate the full sets of true and false vertices, and it is an exhaustive algorithm. Using the Hamming graph and sub-Hamming graph defined in the paper, the authors give criteria for selecting core vertices; the direction and steps for forming and expanding sub-Hamming graphs; and the condition under which a sub-Hamming graph can be represented by a single hidden-layer neuron, with formulas for its weights and threshold. The proposed Hamming-graph learning algorithm applies to arbitrary Boolean functions; it requires no exhaustive search and is guaranteed to converge, and is therefore fast. On the examples from the literature it yields a three-layer feedforward network more compact than that produced by ETL.

4.
Hamming Ball Bulges in Binary Neural Networks and Their Linear Separability   Cited by: 1 (self: 0, others: 1)
杨娟  陆阳  黄镇谨  王强 《自动化学报》2011,37(6):737-745
For binary neural networks, analyzing the logical meaning of each neuron is very important for rule extraction, yet the logical meaning of the linear structure expressed by each neuron has not been fully resolved: the structure and logical meaning of some linearly separable functions remain unclear. In the search for linearly separable structures, this paper introduces the concept of the Hamming ball bulge (汉明球突), gives a method for deciding whether one is linearly separable, and obtains a necessary and sufficient condition for a binary neuron to be equivalent to a linearly separable Hamming ball bulge, thereby establishing a general method for identifying linearly separable Hamming ball bulges. An example verifies the effectiveness of the method.

5.
Binary neural networks can represent any Boolean function exactly, but functions with many isolated nodes, such as the parity-check problem, are hard to realize with a compact network structure. To address this, a learning algorithm is proposed for this class of Boolean functions. The algorithm first uses an ant colony algorithm to optimize the visiting order of true and false nodes; it then combines this with a geometric learning algorithm, giving the steps for expanding the separating hyperplanes according to the optimized visiting order so as to reduce the number of hidden neurons, and provides the expressions for the hidden and output neurons. Typical examples verify the effectiveness of the algorithm.
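Parity is the classic linearly non-separable Boolean function referred to above. As background, a well-known construction (a sketch, not the ant-colony algorithm of this paper) realizes n-bit parity with n hidden threshold units whose outputs are combined with alternating weights:

```python
from itertools import product

def parity_net(x):
    """n-bit parity via a two-layer threshold network:
    hidden unit k fires iff sum(x) >= k; output weights alternate."""
    n = len(x)
    hidden = [int(sum(x) >= k) for k in range(1, n + 1)]
    # With s = sum(x), hidden = [1]*s + [0]*(n-s); the alternating
    # sum 1 - 1 + 1 - ... over the s firing units equals s % 2.
    out = sum((-1) ** k * h for k, h in enumerate(hidden))
    return int(out >= 1)

assert all(parity_net(x) == sum(x) % 2
           for x in product((0, 1), repeat=5))
```

This uses n hidden units; the point of algorithms like the one above is to find smaller hidden layers for such hard cases.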

6.
On Cartesian Balls in Binary Neural Networks   Cited by: 2 (self: 0, others: 2)
Based on the Cartesian product of two classes of linearly separable structures, the concept of the Cartesian ball in Boolean space is defined, and Cartesian balls are proved to be a class of linearly separable structure systems. In addition, for an arbitrary sample Xc in Boolean space, the set consisting of Xc together with any number of samples at Hamming distance 1 from Xc is studied and proved to be a class of Cartesian ball. For rule extraction from Cartesian balls, the paper also analyzes their logical meaning, establishes a general method for identifying Cartesian balls in binary neural networks, describes the concrete steps of this method, and illustrates the identification process with an example.

7.
杨娟  陆阳  黄镇谨 《自动化学报》2014,40(7):1472-1480
Computing system reliability depends on the 0/1 distributions of the basic units and the Boolean logic they form. Exploiting the fact that binary neural networks can realize Boolean logic exactly, this paper proposes a reliability analysis method based on binary neural networks. Observing that the input of every binary neuron is a linear combination of 0/1 variables, the probability distribution function of such a linear combination is proposed and proved. An equivalence between the system function and a Boolean function is established, converting the system into the corresponding binary neural network; then, using the distribution function of the linear combination, the 0/1 output probabilities of the network are computed layer by layer, which solves the reliability computation problem for general systems.
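The layer-by-layer probability propagation is specific to the paper above; as a baseline, the same reliability value can be obtained by exhaustive enumeration over independent 0/1 component states (a sketch with an assumed example system, not the paper's method):

```python
from itertools import product

def reliability(system_works, p_work):
    """Probability that the Boolean system function is 1, assuming
    independent components, component i working with prob p_work[i]."""
    total = 0.0
    for state in product((0, 1), repeat=len(p_work)):
        prob = 1.0
        for s, p in zip(state, p_work):
            prob *= p if s else (1 - p)
        total += prob * system_works(state)
    return total

# Hypothetical example: component 1 in series with a parallel pair (2, 3).
works = lambda s: s[0] and (s[1] or s[2])
r = reliability(works, [0.9, 0.8, 0.8])
# Closed form: 0.9 * (1 - 0.2 * 0.2) = 0.864
assert abs(r - 0.864) < 1e-12
```

Enumeration is exponential in the number of components, which is precisely what motivates structured methods like the network-based computation described above.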

8.
杨娟  陆阳  黄镇谨 《计算机科学》2012,39(7):195-199
In a binary neural network, each binary neuron is equivalent to a linearly separable function, but the logical meaning of the linearly separable function each neuron expresses is still not fully understood. This paper first analyzes several known systems of linearly separable structures; it then discusses whether they cover all binary neurons; finally, it points out that for thresholds in certain ranges the logical meaning of the corresponding linearly separable functions remains unclear, which indicates the direction for further work on the coverage problem for binary neurons.

9.
The properties of Boolean functions satisfying the strict avalanche criterion are studied. It is proved that the Hamming weight of an avalanche Boolean function can only be even, the set of possible Hamming weights of avalanche Boolean functions is determined, and constructions of avalanche Boolean functions with different Hamming weights are given. The lower bound on the number of avalanche Boolean functions is also improved.
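The strict avalanche criterion itself can be tested by brute force: f satisfies it iff flipping any single input bit changes the output on exactly half of all inputs. A minimal sketch (the weight-parity and counting results above are the paper's and are not reproduced here):

```python
from itertools import product

def satisfies_sac(f, n):
    """f satisfies the strict avalanche criterion iff, for every
    input bit i, f(x) != f(x with bit i flipped) holds on exactly
    half of all 2**n inputs."""
    inputs = list(product((0, 1), repeat=n))
    for i in range(n):
        changed = sum(
            f(x) != f(tuple(b ^ (j == i) for j, b in enumerate(x)))
            for x in inputs)
        if changed != len(inputs) // 2:
            return False
    return True

# AND changes under a single-bit flip exactly half the time; the
# projection x1 does not (flipping x1 always changes the output).
assert satisfies_sac(lambda x: x[0] & x[1], 2)
assert not satisfies_sac(lambda x: x[0], 2)
```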

10.
A multi-level alarm system based on neural networks is designed. The system combines multiple sensors with a Boolean neural network algorithm, and the Boolean BP neural network is simplified through Hamming-distance expansion to ease hardware implementation. The neural network and the multi-level alarm scheme reduce the system's false-alarm and missed-alarm rates, and connecting the system to a network improves its detection capability across diverse environments. Field tests show that the system is stable and easy to implement.

11.
Three general methods for obtaining exact bounds on the probability of overfitting are proposed within statistical learning theory: a method of generating and destroying sets, a recurrent method, and a blockwise method. Six particular cases are considered to illustrate the application of these methods. These are the following model sets of predictors: a pair of predictors, a layer of a Boolean cube, an interval of a Boolean cube, a monotonic chain, a unimodal chain, and a unit neighborhood of the best predictor. For the interval and the unimodal chain, the results of numerical experiments are presented that demonstrate the effects of splitting and similarity on the probability of overfitting.

12.
We consider the generalization error of concept learning when using a fixed Boolean function of the outputs of a number of different classifiers. Here, we take into account the ‘margins’ of each of the constituent classifiers. A special case is that in which the constituent classifiers are linear threshold functions (or perceptrons) and the fixed Boolean function is the majority function. This corresponds to a ‘committee of perceptrons,’ an artificial neural network (or circuit) consisting of a single layer of perceptrons (or linear threshold units) in which the output of the network is defined to be the majority output of the perceptrons. Recent work of Auer et al. studied the computational properties of such networks (where they were called ‘parallel perceptrons’), proposed an incremental learning algorithm for them, and demonstrated empirically that the learning rule is effective. As a corollary of the results presented here, generalization error bounds are derived for this special case that provide further motivation for the use of this learning rule.
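The committee of perceptrons described above is straightforward to sketch: a single layer of linear threshold units whose ±1 outputs are combined by majority vote (illustrative, hand-picked weights; this is not the incremental learning rule of Auer et al.):

```python
def committee_predict(weights, biases, x):
    """Majority vote of a single layer of perceptrons.
    Each perceptron outputs +1 or -1; the committee outputs the
    sign of the vote total (an odd committee avoids ties)."""
    votes = [1 if sum(w * xi for w, xi in zip(ws, x)) + b >= 0 else -1
             for ws, b in zip(weights, biases)]
    return 1 if sum(votes) > 0 else -1

# Three perceptrons on 2-d inputs (hypothetical weights and biases).
W = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [0.0, 0.0, -1.0]
assert committee_predict(W, b, (2.0, 2.0)) == 1
assert committee_predict(W, b, (-2.0, -2.0)) == -1
```

The margin of each constituent perceptron, central to the bounds above, is the distance of x from that perceptron's separating hyperplane.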

13.
We propose a novel approach to defining Artificial Neural Network (ANN) architectures from Boolean factors. ANNs are a subfield of machine learning applicable to many areas of life. However, defining an ANN architecture for a given problem is not formalized and remains an open research problem. Since it is difficult to look inside the network and figure out exactly what it has learnt, the complexity of the technique makes its interpretation tedious. We propose in this paper to build feedforward ANNs using the optimal factors obtained from the Boolean context representing the data. Since optimal factors completely cover the data and therefore give an explanation of it, we can interpret the neuron activations and justify the presence of each neuron in the proposed neural network. We show through experiments and comparisons on the data sets used that this approach provides relatively better results for some key performance measures.

14.
This paper introduces the theory of bi-decomposition of Boolean functions. This approach optimally exploits functional properties of a Boolean function in order to find an associated multilevel circuit representation with very short delay using simple two-input gates. The machine learning process is based on the Boolean Differential Calculus and focuses on detecting the profitable functional properties available in the Boolean function. For clarity, the bi-decomposition of completely specified Boolean functions is introduced first. Bi-decomposition of incompletely specified Boolean functions, discussed second, offers a significantly better chance of success. The inclusion of the weak bi-decomposition allows the completeness of the suggested decomposition method to be proved. The basic machine learning task consists of determining the decomposition type and the dedicated sets of variables. Building on this knowledge, a complete recursive design algorithm is suggested. Experimental results on MCNC benchmarks show that bi-decomposition outperforms SIS and other BDD-based decomposition methods in terms of area and delay of the resulting circuits, with comparable CPU time. Switching from the ON-set/OFF-set model of Boolean function lattices to their upper- and lower-bound model gives a new view of bi-decomposition. This new form of the bi-decomposition theory makes a comprehensible generalization to multi-valued functions possible.

15.
The main issue of the combinatorial approach to overfitting is to obtain computationally efficient formulas for overfitting probabilities. A group-theoretical approach is proposed to simplify derivation of such formulas when the set of predictors has a certain group of symmetries. Examples of the sets are given. The general estimate of overfitting probability is proved for the randomized learning algorithm. It is applied to four model sets of predictors: a layer of the Boolean cube, the Boolean cube, the unimodal chain, and a bundle of monotonic chains.

16.
Multi-label learning deals with the problem where each instance is associated with multiple labels simultaneously. The task of this learning paradigm is to predict the label set for each unseen instance by analyzing training instances with known label sets. In this paper, a neural-network-based multi-label learning algorithm named Ml-rbf is proposed, which is derived from traditional radial basis function (RBF) methods. Briefly, the first layer of an Ml-rbf neural network is formed by conducting clustering analysis on instances of each possible class, where the centroid of each clustered group is regarded as the prototype vector of a basis function. After that, the second-layer weights of the Ml-rbf neural network are learned by minimizing a sum-of-squares error function. Specifically, the information encoded in the prototype vectors corresponding to all classes is fully exploited to optimize the weights corresponding to each specific class. Experiments on three real-world multi-label data sets show that Ml-rbf achieves highly competitive performance compared to other well-established multi-label learning algorithms.

17.
Universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper; it is capable of implementing all Boolean functions (BFs). BFs are classified into: 1) the linearly separable Boolean function (LSBF) class, 2) the parity Boolean function (PBF) class, and 3) the non-LSBF and non-PBF class. To implement these functions, UP takes different kinds of simple topological structures, each containing at most one hidden layer with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which is able to quickly train a network to any prescribed BF. The focus is on realizing LSBFs and PBFs with a single-layer perceptron (SLP) using the new algorithm. Two criteria, for LSBFs and PBFs respectively, are proposed, and a new measure for a BF, named the nonlinearly separable degree (NLSD), is introduced. In the sense of this measure, the PBF is the most complex. The new algorithm has many advantages, including fast running speed, good robustness, and no need to consider the convergence property. For example, the number of iterations and computations needed to implement the basic 2-bit logic operations such as AND, OR, and XOR with the new algorithm is far smaller than that required by existing algorithms such as the error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from UP can be used directly in designing the templates of cellular neural networks (CNNs), which have been considered a new spatial-temporal sensory computing paradigm.

18.
Boolean Factor Analysis by Attractor Neural Network   Cited by: 1 (self: 0, others: 1)
A common problem encountered in disciplines such as statistics, data analysis, signal processing, textual data representation, and neural network research is finding a suitable representation of the data in a lower-dimensional space. One of the principles used for this purpose is factor analysis. In this paper, we show that Hebbian learning and a Hopfield-like neural network can serve as a natural procedure for Boolean factor analysis. To ensure efficient Boolean factor analysis, we propose an original modification not only of the Hopfield network architecture but also of its dynamics. We describe the neural network implementation of the Boolean factor analysis method and show the advantages of our Hopfield-like network modification step by step on artificially generated data. Finally, we show the efficiency of the method on artificial data containing a known list of factors. Our approach has the advantage of being able to analyze very large data sets while preserving the nature of the data.

19.
A hybrid learning system for image deblurring   Cited by: 1 (self: 0, others: 1)
Min  Mitra   《Pattern recognition》2002,35(12):2881-2894
In this paper we propose a 3-stage hybrid learning system: unsupervised learning to cluster the data in the first stage, supervised learning in the middle stage to determine network parameters, and finally a decision-making stage using a voting mechanism. We take this opportunity to study the role of the various supervised learning systems that constitute the middle stage. Specifically, we focus on a one-hidden-layer neural network with sigmoidal activation function, a radial basis function network with Gaussian activation function, and a projection pursuit learning network with Hermite polynomials as the activation function. These learning systems rank in increasing order of complexity. We train and test each system with identical data sets. Experimental results show that the learning ability of a system is controlled by the shape of the activation function when the other parameters remain fixed. We observe that clustering in the input space leads to better system performance. Experimental results provide compelling evidence in favor of the use of the hybrid learning system and of committee machines with a gating network.

20.
For an electrical, mechanical, or hybrid system described diagrammatically as a network of interconnected components, fault tree modeling of system reliability as a function of individual component failure probabilities gives rise to logic expressions obtained from the network connections. Application of the method of Boolean differences in the analysis of such Boolean expressions is discussed, and it is shown that the influence of the status of specific components on the reliability of the total system may be investigated by straightforward algebraic operations on the network failure function.
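The Boolean difference used above, df/dx_i = f(x with x_i=0) XOR f(x with x_i=1), is 1 exactly on those states where the status of component i alone decides the system outcome. A minimal sketch with an assumed failure function (not drawn from the paper):

```python
from itertools import product

def boolean_difference(f, i, n):
    """Truth table of df/dx_i = f(x|x_i=0) XOR f(x|x_i=1):
    1 where the status of component i alone decides f."""
    def sub(x, v):
        return tuple(v if j == i else b for j, b in enumerate(x))
    return {x: f(sub(x, 0)) ^ f(sub(x, 1))
            for x in product((0, 1), repeat=n)}

# Assumed system: fails if component 1 fails, or if 2 and 3 both fail.
fail = lambda x: x[0] | (x[1] & x[2])
d1 = boolean_difference(fail, 0, 3)
# Component 1 is decisive exactly when 2 and 3 are not both failed.
assert all(d1[x] == 1 - (x[1] & x[2]) for x in d1)
```

States where the difference is 1 identify the component as critical, which is the "straightforward algebraic operation" the abstract refers to.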


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号