20 similar documents were found (search time: 0 ms)
1.
A Minimax Method for Learning Functional Networks. Cited by: 9 (self-citations: 0, others: 9)
In this paper, a minimax method for learning functional networks is presented. The idea of the method is to minimize the maximum absolute error between predicted and observed values. In addition, the invertible functions appearing in the model are assumed to be linear convex combinations of invertible functions. This guarantees the invertibility of the resulting approximations. The learning method leads to a linear programming problem, and then: (a) the solution is obtained in a finite number of iterations, and (b) the global optimum is attained. The method is illustrated with several examples of applications, including the Hénon and Lozi series. The results show that the method outperforms standard least squares direct methods.
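As a hedged illustration of the minimax idea above (not the authors' exact functional-network formulation), the Chebyshev fit of any linear-in-parameters model can be posed as a linear program: minimize an auxiliary bound t on the absolute residuals. A minimal Python sketch with scipy follows; the polynomial basis in the toy usage is an assumed placeholder.

```python
import numpy as np
from scipy.optimize import linprog

# Minimax (Chebyshev) fit of a linear-in-parameters model y ~ sum_j a_j * phi_j(x):
# minimize t  subject to  -t <= y_i - Phi_i a <= t  (a free, t >= 0).
# Only a sketch of the LP idea in the abstract; the basis functions below are
# illustrative placeholders, not the paper's functional-network model.

def minimax_fit(Phi, y):
    n, k = Phi.shape
    c = np.zeros(k + 1)
    c[-1] = 1.0                                   # decision vars: [a_1..a_k, t]
    A1 = np.hstack([Phi, -np.ones((n, 1))])       # Phi a - t <= y
    A2 = np.hstack([-Phi, -np.ones((n, 1))])      # -Phi a - t <= -y
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * k + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:k], res.x[-1]                   # coefficients, max absolute error

# Toy usage: minimax fit of y = sin(x) with basis {1, x, x^2, x^3}.
x = np.linspace(0, np.pi, 50)
Phi = np.vander(x, 4, increasing=True)
a, max_err = minimax_fit(Phi, np.sin(x))
print(a, max_err)
```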
2.
Current analyses of complex biological networks focus either on their global statistical connectivity properties (e.g. topological path lengths and node connectivity ranks) or on the statistics of specific local connectivity circuits (motifs). Here we present a different approach, Functional Topology, which enables identification of hidden topological and geometrical fingerprints of biological computing networks that afford their functioning (the form-function fingerprints). To do so we represent the network structure in terms of three matrices: (1) the topological connectivity matrix, in which row i contains the shortest topological path lengths from node i to all other nodes; (2) the topological correlation matrix, in which element (i,j) is the correlation between the topological connectivity of nodes i and j; and (3) the weighted graph matrix, in which the links represent the conductance between nodes, which can simply be one over the geometrical length, the synaptic strength in the case of neural networks, or any other quantity that represents the strength of the connections. Various methods (e.g. clustering algorithms, random matrix theory, the eigenvalue spectrum, etc.) can be used to analyze these matrices; here we use the newly developed functional holography approach, which is based on clustering of the matrices following their collective normalization. We illustrate the approach by analyzing networks with different topological and geometrical properties: (1) artificial networks, including random, regular 4-fold and 5-fold lattice, and tree-like structures; (2) cultured neural networks, both a single network and a network composed of three linked sub-networks; and (3) a model neural network composed of two overlapping sub-networks. Using these special networks, we demonstrate the method's ability to reveal functional topology features of the networks.
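The three matrix representations described above can be sketched in a few lines of Python with networkx and numpy; the random geometric graph and the "conductance = 1 / geometric length" rule are assumed placeholders, and the functional-holography clustering step is omitted.

```python
import numpy as np
import networkx as nx

# Sketch of the three matrices described in the abstract; the example graph
# and edge "lengths" are placeholders, not data from the paper.
G = nx.random_geometric_graph(30, 0.3, seed=0)
nodes = sorted(G.nodes())
n = len(nodes)

# 1. Topological connectivity matrix: shortest topological path lengths.
D = np.zeros((n, n))
sp = dict(nx.all_pairs_shortest_path_length(G))
for i, u in enumerate(nodes):
    for j, v in enumerate(nodes):
        D[i, j] = sp[u].get(v, n)        # unreachable pairs get a large finite value

# 2. Topological correlation matrix: correlation between the rows of D.
C = np.corrcoef(D)

# 3. Weighted graph matrix: here conductance = 1 / geometric length of the link
#    (synaptic strength would play the same role in a neural network).
pos = nx.get_node_attributes(G, "pos")
W = np.zeros((n, n))
for u, v in G.edges():
    length = np.linalg.norm(np.array(pos[u]) - np.array(pos[v]))
    i, j = nodes.index(u), nodes.index(v)
    W[i, j] = W[j, i] = 1.0 / length if length > 0 else 0.0

print(D.shape, C.shape, W.shape)
```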
3.
Zhang Chunping. 《计算机与数字工程》 (Computer and Digital Engineering), 2012, 40(10): 31-33, 39
Incremental learning is the process of learning new information on the basis of existing learning results in order to acquire new knowledge, while preserving those existing results as far as possible. This paper first briefly reviews covering-based constructive neural networks and then proposes a fast incremental learning algorithm built on them. Starting from the classification ability of the existing network, the algorithm further improves that ability through fast incremental learning of the new samples. Experimental results show that the algorithm is effective.
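A rough sketch of the incremental covering idea, under stated assumptions: existing covers (center, radius, label) are preserved, and a new sample only spawns a new cover when no existing cover of its class contains it. The cover-radius rule and the nearest-cover prediction below are simplified stand-ins, not the paper's algorithm.

```python
import numpy as np

class CoveringClassifier:
    def __init__(self):
        self.covers = []          # list of (center, radius, label)

    def _covered(self, x, label):
        return any(lab == label and np.linalg.norm(x - c) <= r
                   for c, r, lab in self.covers)

    def partial_fit(self, X, y):
        # Incremental step: keep existing covers, add new ones only when needed.
        for x, label in zip(X, y):
            if self._covered(x, label):
                continue
            # Radius heuristic: stay inside the nearest cover of a different class.
            other = [np.linalg.norm(x - c) for c, _, lab in self.covers if lab != label]
            radius = 0.9 * min(other) if other else 1.0
            self.covers.append((np.asarray(x, dtype=float), radius, label))

    def predict(self, X):
        out = []
        for x in X:
            # Nearest cover (by distance to its boundary) decides the label.
            c, r, lab = min(self.covers,
                            key=lambda cov: np.linalg.norm(x - cov[0]) - cov[1])
            out.append(lab)
        return out

clf = CoveringClassifier()
clf.partial_fit(np.array([[0.0, 0.0], [3.0, 3.0]]), [0, 1])   # initial training
clf.partial_fit(np.array([[0.1, 0.2]]), [0])                   # incremental step
print(clf.predict(np.array([[0.0, 0.1], [2.9, 3.1]])))
```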
4.
Evolutionary Robotics is a powerful method for generating efficient controllers with minimal human intervention, but its applicability to real-world problems remains a challenge because the method takes a long time and requires software simulations that do not necessarily transfer smoothly to physical robots. In this paper we describe a method that overcomes these limitations by evolving robots for the ability to adapt on-line in a few seconds. Experiments show that this method requires fewer generations and smaller populations to evolve, and that evolved robots adapt in a few seconds to unpredictable change, including transfers from simulations to physical robots, and display non-trivial behaviors. Robots evolved with this method can be dispatched to other planets and to our homes, where they will autonomously and quickly adapt to the specific properties of their environments if and when necessary.
5.
A new polynomial functional network computation model is proposed for solving polynomial arithmetic problems over an arbitrary number field or ring. A learning algorithm based on functional networks is given for computing a multiple of an arbitrary univariate polynomial, with the network parameters obtained by solving a system of linear equations. Experimental results show that, compared with traditional methods, this neural-computation approach can obtain not only exact solutions of the problem but also approximate solutions. This provides an effective method for the secondary development of engineering computation software.
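The "network parameters by solving linear equations" step can be illustrated as follows: if m(x) is a multiple of p(x), the cofactor q(x) with m = p·q satisfies a linear system in q's coefficients, because polynomial multiplication is a convolution. This is only a sketch of that linear-algebra step, not the paper's functional-network model.

```python
import numpy as np

def convolution_matrix(p, q_len):
    # Matrix A such that A @ q gives the coefficients of p(x) * q(x).
    m_len = len(p) + q_len - 1
    A = np.zeros((m_len, q_len))
    for j in range(q_len):
        A[j:j + len(p), j] = p
    return A

p = np.array([1.0, -2.0, 1.0])        # p(x) = 1 - 2x + x^2 (ascending powers)
q_true = np.array([2.0, 3.0])         # q(x) = 2 + 3x
m = np.convolve(p, q_true)            # m(x) = p(x) * q(x), a multiple of p

A = convolution_matrix(p, len(q_true))
q_hat, *_ = np.linalg.lstsq(A, m, rcond=None)   # exact here; least squares in general
print(q_hat)                                     # -> [2. 3.]
```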
6.
7.
Rojas I., Pomares H., Gonzáles J., Bernier J. L., Ros E., Pelayo F. J., Prieto A. Neural Processing Letters, 2000, 12(1): 1-17
The main architectures, learning abilities and applications of radial basis function (RBF) neural networks are well documented. However, to the best of our knowledge, no in-depth analyses have been carried out into the influence on the behaviour of the neural network arising from the use of different alternatives for the design of an RBF (different non-linear functions, distances, number of neurons, structures, etc.). Thus, as a complement to the existing intuitive knowledge, it is necessary to have a more precise understanding of the significance of the different alternatives. In the present contribution, the relevance and relative importance of the parameters involved in such a design are investigated by using a statistical tool, the ANalysis Of the VAriance (ANOVA). In order to obtain results that are widely applicable, various problems of classification, functional approximation and time series estimation are analyzed. Conclusions are drawn regarding the whole set.
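A toy illustration of the statistical tool, under assumptions: one design factor (the number of RBF neurons) is varied, test errors are collected over repetitions, and a one-way ANOVA tests its significance. The synthetic data and the small KMeans-plus-ridge RBF regressor stand in for the benchmarks and the full factor set actually studied in the paper.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def rbf_fit_predict(Xtr, ytr, Xte, n_centers, width=0.5):
    # Tiny Gaussian-RBF regressor: KMeans centers + ridge-regressed output weights.
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(Xtr).cluster_centers_
    phi = lambda X: np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width ** 2))
    model = Ridge(alpha=1e-3).fit(phi(Xtr), ytr)
    return model.predict(phi(Xte))

errors = {k: [] for k in (5, 10, 20)}      # factor levels: number of RBF neurons
for _ in range(10):                        # repetitions
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]
    for k in errors:
        pred = rbf_fit_predict(Xtr, ytr, Xte, n_centers=k)
        errors[k].append(np.mean((pred - yte) ** 2))

F, p = f_oneway(*errors.values())
print(f"F = {F:.2f}, p = {p:.4f}")         # a small p-value suggests the factor matters
```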
8.
Based on a concrete analysis of the problems of current neural networks, this paper argues that introducing heuristic information into neural network training is an important way to improve the learning ability, quality, and efficiency of a network. The sources and kinds of heuristic knowledge are then discussed; heuristic knowledge is divided into two classes, inductive constraints and mandatory constraints, and corresponding strategies for introducing it into network training are established. Concrete principles for introducing and selecting heuristic knowledge are given, and two heuristic knowledge models based on derivative relations are built. Finally, a concrete training algorithm for the neural network is established. Application results demonstrate the effectiveness of the proposed strategies and methods.
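One hedged way to picture a derivative-based heuristic constraint is as a penalty term added to the training loss, here a monotonicity (non-negative slope) penalty on a tiny one-hidden-layer network; the network, data, and penalty form are illustrative assumptions rather than the models proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 40)
y = np.sqrt(X) + 0.05 * rng.normal(size=40)   # the underlying function is increasing

H = 8                                          # hidden units
def unpack(w):
    w1, b1, w2, b2 = np.split(w, [H, 2 * H, 3 * H])
    return w1, b1, w2, b2[0]

def net(w, x):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def loss(w, lam=5.0):
    data = np.mean((net(w, X) - y) ** 2)
    # Heuristic (inductive) constraint: dy/dx >= 0, checked by finite differences.
    grid = np.linspace(0, 1, 100)
    slope = np.diff(net(w, grid)) / np.diff(grid)
    penalty = np.mean(np.maximum(0.0, -slope) ** 2)
    return data + lam * penalty

w0 = 0.1 * rng.normal(size=3 * H + 1)
res = minimize(loss, w0, method="L-BFGS-B")
print("training loss:", res.fun)
```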
9.
Neural networks have been widely applied in fields such as pattern recognition, automatic control, and data mining, but the speed of their learning methods cannot meet practical needs. The traditional error back-propagation (BP) method is based on gradient descent and requires many iterations; all network parameters have to be determined iteratively during training, so the algorithm's computational cost and search space are very large. The Extreme Learning Machine (ELM) follows a single-pass learning idea, which greatly increases learning speed, avoids repeated iterations and local minima, and exhibits good generalization performance, robustness, and controllability. However, for different data sets and application domains, whether ELM is used for classification or regression, the ELM algorithm itself still has problems. This paper therefore gives an in-depth comparative analysis of existing methods and points out future directions for extreme learning methods.
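A minimal sketch of the ELM recipe summarized above: hidden-layer weights are drawn at random and left untrained, and only the output weights are computed in one shot with a pseudoinverse, so no iterative back-propagation is needed. The toy regression task is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                  # output weights: one linear solve
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) in a single pass.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X).ravel()
model = elm_train(X, Y, n_hidden=30)
print(np.mean((elm_predict(model, X) - Y) ** 2))   # small training MSE
```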
10.
11.
Multilayer Perceptrons (MLPs) use scalar products to compute weighted activation of neurons, providing decision borders using combinations of soft hyperplanes. The weighted fan-in activation function may be replaced by a distance function between the inputs and the weights, offering a natural generalization of the standard MLP model. Non-Euclidean distance functions may also be introduced by normalization of the input vectors into an extended feature space. Both approaches influence the shapes of decision borders dramatically. An illustrative example showing these changes is provided.
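A small sketch contrasting the standard inner-product neuron with the distance-based neuron discussed above: the weight vector acts as a prototype and the activation depends on a (possibly non-Euclidean) distance between x and w rather than on the scalar product. The layer shapes and the Minkowski metric parameter are illustrative choices.

```python
import numpy as np

def dot_product_layer(X, W, b):
    # Classic MLP pre-activation: hyperplane-like decision borders.
    return np.tanh(X @ W + b)

def distance_layer(X, W, b, metric_p=2):
    # Distance-based pre-activation: prototype-centred, (hyper)spherical borders.
    # Columns of W act as prototypes; metric_p selects the Minkowski distance.
    d = np.linalg.norm(X[:, None, :] - W.T[None, :, :], ord=metric_p, axis=-1)
    return np.tanh(b - d)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
W = rng.normal(size=(2, 3))     # 2 inputs -> 3 neurons
b = rng.normal(size=3)
print(dot_product_layer(X, W, b).shape, distance_layer(X, W, b).shape)  # (5, 3) twice
```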
12.
Research and Implementation of an Intrusion Detection System Based on Ensemble Neural Networks. Cited by: 1 (self-citations: 8, others: 1)
To address the problems of traditional intrusion detection models, such as low detection efficiency and difficulty in detecting unknown intrusion behaviors, ensemble learning is studied and an ensemble neural network intrusion detection model based on a genetic algorithm is proposed; the working principle of the model and the main functions of its modules are described. The model uses a genetic algorithm to find trained neural networks that differ substantially from one another and combines them into an ensemble. Experiments show that the ensemble neural network achieves a higher detection rate than the best single neural network. Because the model uses machine learning methods, the system can adapt dynamically to its environment: it not only recognizes known intrusions well but can also identify unknown intrusion behaviors, thereby making intrusion detection intelligent.
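A rough sketch of the ensemble idea, with assumptions: a pool of differently initialised networks is trained, and a bare-bones evolutionary search (selection and mutation only) picks the subset whose majority vote scores best on validation data. Toy data replaces the intrusion-detection records, and validation accuracy replaces the paper's diversity-based selection criterion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, random_state=0)

# Pool of networks that differ only in their random initialisation.
pool = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=300, random_state=s).fit(Xtr, ytr)
        for s in range(8)]
preds = np.array([m.predict(Xval) for m in pool])          # shape (n_nets, n_val)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    vote = (preds[mask.astype(bool)].mean(axis=0) >= 0.5).astype(int)
    return (vote == yval).mean()

# Bit-string search: each gene says whether a network joins the ensemble.
popsize, n_gen = 20, 30
population = rng.integers(0, 2, size=(popsize, len(pool)))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-popsize // 2:]]          # selection
    children = parents[rng.integers(0, len(parents), size=popsize - len(parents))].copy()
    flip = rng.random(children.shape) < 0.1                           # mutation
    children[flip] ^= 1
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("selected networks:", np.flatnonzero(best), "val accuracy:", fitness(best))
```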
13.
To address the problem of information overload on the Internet, an information recommendation method based on content analysis is proposed. The method uses a neural network as the knowledge representation and reasoning mechanism to build a model of the user's interests, and then uses this user model to predict the user's preference for items of information and make recommendations accordingly. The proposed method is validated through a simulation experiment.
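A hedged sketch of the content-based scheme: a small neural network learns a user-interest model from features of items the user has already rated, then ranks unseen items by predicted preference. The documents, ratings, and the TF-IDF plus MLP combination are illustrative assumptions, not the paper's concrete design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPRegressor

rated_docs = ["deep learning for image recognition",
              "convolutional neural networks tutorial",
              "stock market daily report",
              "football match summary"]
ratings = [5.0, 4.5, 1.0, 0.5]                     # the user's interest scores (made up)

vec = TfidfVectorizer()
X = vec.fit_transform(rated_docs)
# The neural network acts as the user-interest model.
user_model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=0).fit(X.toarray(), ratings)

new_docs = ["recurrent neural networks for text",
            "weather forecast for the weekend"]
scores = user_model.predict(vec.transform(new_docs).toarray())
for doc, s in sorted(zip(new_docs, scores), key=lambda t: -t[1]):
    print(f"{s:5.2f}  {doc}")                      # recommend the highest-scored items
```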
14.
An Intelligent Controller Based on a Class of Dynamic Recurrent Neural Networks. Cited by: 2 (self-citations: 0, others: 2)
An adaptive control method based on an improved dynamic recurrent neural network is proposed. The learning algorithm of the dynamic recurrent network is studied, its convergence is analyzed, and the range of effective learning rates that guarantees convergence is derived; on this basis, a fuzzy-inference method for adapting the learning rate is proposed. Computer simulation experiments show that the proposed control method is effective for unknown, nonlinear controlled plants.
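A crude stand-in for the adaptive learning-rate idea: the step size is adjusted from the observed error trend and clipped to an assumed "safe" range, echoing the derived learning-rate bounds mentioned above. The if/else rule is only a caricature of fuzzy inference, and the quadratic toy loss replaces the recurrent-network controller of the paper.

```python
import numpy as np

def loss(w):
    return 0.5 * (w - 3.0) ** 2

def grad(w):
    return w - 3.0

lr, lr_min, lr_max = 0.1, 1e-3, 0.9        # assumed convergence-safe learning-rate range
w, prev_err = 10.0, loss(10.0)
for step in range(50):
    w -= lr * grad(w)
    err = loss(w)
    # Rule base: error dropped a lot -> grow lr slightly; error grew -> shrink lr sharply.
    if err < 0.9 * prev_err:
        lr *= 1.1
    elif err > prev_err:
        lr *= 0.5
    lr = float(np.clip(lr, lr_min, lr_max))
    prev_err = err
print(f"w = {w:.4f}, final lr = {lr:.3f}")
```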
15.
Support-Vector Networks. Cited by: 722 (self-citations: 0, others: 722)
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
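A minimal sketch of a support-vector network in the spirit of the abstract: a polynomial kernel plays the role of the non-linear mapping to a high-dimensional feature space, and the soft-margin parameter C handles non-separable data. The digits data set is an assumed stand-in for the OCR benchmark mentioned above.

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

X, y = datasets.load_digits(return_X_y=True)
mask = (y == 3) | (y == 8)                         # reduce to a two-group problem
Xtr, Xte, ytr, yte = train_test_split(X[mask], y[mask], test_size=0.3, random_state=0)

clf = svm.SVC(kernel="poly", degree=3, C=1.0)      # polynomial input transformation
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
print("support vectors per class:", clf.n_support_)
```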
16.
Stefan Wermter. Applied Intelligence, 2000, 12(1-2): 27-42
Previously, neural networks have shown interesting performance results for tasks such as classification, but they still suffer from an insufficient focus on the structure of the knowledge represented therein. In this paper, we analyze various knowledge extraction techniques in detail and we develop new transducer extraction techniques for the interpretation of recurrent neural network learning. First, we provide an overview of different possibilities to express structured knowledge using neural networks. Then, we analyze a type of recurrent network rigorously, applying a broad range of different techniques. We argue that analysis techniques, such as weight analysis using Hinton diagrams, hierarchical cluster analysis, and principal component analysis may be useful for providing certain views on the underlying knowledge. However, we demonstrate that these techniques are too static and too low-level for interpreting recurrent network classifications. The contribution of this paper is a particularly broad analysis of knowledge extraction techniques. Furthermore, we propose dynamic learning analysis and transducer extraction as two new dynamic interpretation techniques. Dynamic learning analysis provides a better understanding of how the network learns, while transducer extraction provides a better understanding of what the network represents.
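One of the "static" techniques mentioned above, hierarchical cluster analysis of hidden states, can be sketched as follows; the random, untrained Elman-style network and the toy symbol sequences are placeholders used only to show the mechanics, not the transducer-extraction method proposed in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_sym, n_hidden = 4, 6
W_in = rng.normal(scale=0.5, size=(n_sym, n_hidden))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))

def hidden_states(seq):
    # Collect the hidden-state trajectory of an Elman-style update for one sequence.
    h, out = np.zeros(n_hidden), []
    for s in seq:
        h = np.tanh(W_in[s] + h @ W_rec)     # one-hot input row + recurrent feedback
        out.append(h.copy())
    return out

states = []
for seq in ([0, 1, 2, 3], [3, 2, 1, 0], [0, 0, 1, 1]):
    states.extend(hidden_states(seq))

Z = linkage(np.array(states), method="ward")
print(fcluster(Z, t=3, criterion="maxclust"))   # cluster label of each hidden state
```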
17.
Advances in Research on Evolutionary Neural Networks. Cited by: 11 (self-citations: 0, others: 11)
Evolutionary neural networks are neural networks obtained by applying evolutionary algorithms to network construction and learning, and they exhibit strong, robust adaptability. This paper surveys recent progress in evolutionary neural network methods and their applications, and discusses and looks ahead to some of the open problems that have arisen in this research.
18.
A functional-network method for solving functional equations is given. A functional network model is designed to approximate the real roots of a class of functional equations, and a corresponding learning algorithm is given in which the network parameters are obtained by solving a system of linear equations. Compared with traditional methods, this method can not only quickly find exact solutions of a functional equation but can also obtain approximate solutions. Computer simulation results show that the algorithm is feasible and effective.
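A hedged sketch of the collocation-style idea: the unknown function is written as a linear combination of basis functions, the functional equation is imposed on a grid of points, and the resulting linear system yields the parameters. The particular equation f(x+1) - f(x) = x and the polynomial basis are assumptions for illustration, not the class of equations treated in the paper.

```python
import numpy as np

basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]

xs = np.linspace(0.0, 5.0, 40)                   # collocation points
# Rows: phi_j(x+1) - phi_j(x); right-hand side: x; plus f(0) = 0 to pin the constant term.
A = np.column_stack([phi(xs + 1) - phi(xs) for phi in basis])
b = xs.copy()
A = np.vstack([A, [phi(np.array(0.0)) for phi in basis]])
b = np.append(b, 0.0)

coef, *_ = np.linalg.lstsq(A, b, rcond=None)     # network parameters via a linear solve
f = lambda x: sum(c * phi(x) for c, phi in zip(coef, basis))
print("coefficients:", np.round(coef, 3))        # expect approximately [0, -0.5, 0.5]
print("residual check:", np.max(np.abs(f(xs + 1) - f(xs) - xs)))
```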
19.
A Polynomial-Function-Based Recurrent Neural Network Model and Its Applications. Cited by: 2 (self-citations: 1, others: 2)
Exploiting the fact that recurrent neural networks have both feed-forward and feedback paths, this paper sets the activation functions of the hidden-layer neurons to a sequence of adjustable polynomial functions and proposes a new polynomial-function-based recurrent neural network model. The model not only retains the characteristics of traditional recurrent neural networks but also has strong function-approximation ability. For recursive computation problems, a learning algorithm for the polynomial-function-based recurrent neural network is proposed, and the network model is applied to the approximate factorization of multivariate polynomials, where the learning algorithm shows clear advantages. Worked examples show that the algorithm is effective, converges quickly, and achieves high computational accuracy, and that it is applicable to the field of recursive computation problems. The proposed model and learning algorithm provide important guidance for approximate algebraic symbolic computation.