Sort order: 19 results in total (search time 62 ms)
1.
A Survey of the Current Development of Random Neural Networks   Cited 4 times in total (0 self-citations, 4 by others)   Free full-text PDF available
The random neural network (RNN) is a relatively distinctive and comparatively recent class of artificial neural network, and its structure, learning algorithms, state-update rules, and applications all have characteristics of their own. As a mathematical model of biological neurons, the random neural network has shown clear strengths in associative memory, image processing, and combinatorial optimization. While reviewing the current state of development, the network's properties, and its wide range of applications, this survey specifically compares the RNN with the Hopfield network, simulated annealing, and the Boltzmann machine on combinatorial optimization problems, and concludes that the RNN is an effective approach to problems such as the traveling salesman problem (TSP).
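As background for the comparisons above, the following is a minimal sketch of the steady-state ("signal-flow") equations of Gelenbe's random neural network, solved by fixed-point iteration. The weights, firing rates, and arrival rates are invented for the example and do not come from the survey.

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, iters=200, tol=1e-9):
    """Fixed-point iteration for the steady-state excitation probabilities q_i
    of a Gelenbe random neural network:
        q_i = lambda_plus_i / (r_i + lambda_minus_i), where
        lambda_plus_i  = sum_j q_j * W_plus[j, i]  + Lambda[i]   (excitatory traffic)
        lambda_minus_i = sum_j q_j * W_minus[j, i] + lam[i]      (inhibitory traffic)
    """
    n = len(r)
    q = np.full(n, 0.5)
    for _ in range(iters):
        lam_plus = q @ W_plus + Lambda
        lam_minus = q @ W_minus + lam
        q_new = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Illustrative 3-neuron network (all numbers are made up for the example).
rng = np.random.default_rng(0)
W_plus = rng.uniform(0, 0.3, (3, 3))
W_minus = rng.uniform(0, 0.3, (3, 3))
r = W_plus.sum(axis=1) + W_minus.sum(axis=1)   # firing rate = total outgoing weight
Lambda = np.array([0.5, 0.2, 0.1])             # external excitatory arrival rates
lam = np.array([0.1, 0.1, 0.1])                # external inhibitory arrival rates
print(rnn_steady_state(W_plus, W_minus, Lambda, lam, r))
```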
2.
A Bayesian Expectation-Maximization Method: the BEM Algorithm   Cited 3 times in total (2 self-citations, 1 by others)
By analyzing why the standard EM algorithm converges to local extrema, a new Bayesian learning algorithm for neural networks, the BEM algorithm, is proposed. The algorithm overcomes this shortcoming of standard EM, prevents the overfitting that standard EM can exhibit, and avoids the situation in which standard EM responds only to the first mode and thereby loses its ability to generalize. Experimental results demonstrate the correctness and effectiveness of the algorithm, which is of some value for the study and further development of the theory of standard EM learning.
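The BEM algorithm itself is not spelled out in the abstract. As a point of reference, the sketch below shows the standard EM iteration for a simple two-component Gaussian mixture; its dependence on the random initialization is exactly the route to the local extrema the paper sets out to avoid. All data and settings are illustrative.

```python
import numpy as np

def em_gmm_1d(x, iters=100, seed=0):
    """Standard EM for a 1-D two-component Gaussian mixture (illustrative)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, 2)                   # random start: may lead to a local optimum
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and standard deviations
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk) + 1e-12
        pi = Nk / len(x)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1.0, 200), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))
```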
3.
Research on an Improved Algorithm for Optimization with Random Neural Networks   Cited 2 times in total (0 self-citations, 2 by others)
The random neural network is defined in imitation of the physiological mechanisms of real biological neural networks, and its structure and applications have characteristics of their own. Alongside a detailed discussion of how a dynamic random neural network solves TSP, a typical NP-hard optimization problem, an effective improved algorithm is proposed that guarantees descent of the energy function even when the parameters are chosen simply, an idea that carries over to combinatorial optimization problems in general. The improved algorithm is validated on a 10-city TSP, showing that the RNN is an effective approach to the TSP.
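The improved update rule is not given in the abstract; the sketch below only illustrates the kind of quadratic energy function such networks descend when a TSP tour is encoded as an n x n assignment matrix (constraint penalties plus tour length). The penalty weights and city coordinates are illustrative assumptions, not the paper's values.

```python
import numpy as np

def tsp_energy(V, D, A=500.0, B=500.0, C=200.0):
    """Hopfield-style quadratic energy for TSP.

    V[i, k] ~ degree to which city i occupies tour position k (n x n matrix).
    D[i, j] is the distance between cities i and j.
    The first two terms penalize violated permutation constraints,
    the last term is the (soft) tour length.
    """
    row_penalty = np.sum((V.sum(axis=1) - 1.0) ** 2)   # each city used exactly once
    col_penalty = np.sum((V.sum(axis=0) - 1.0) ** 2)   # each position used exactly once
    V_next = np.roll(V, -1, axis=1)                    # successor position (cyclic)
    tour_length = np.einsum('ij,ik,jk->', D, V, V_next)
    return A * row_penalty + B * col_penalty + C * tour_length

# 10 random cities, echoing the abstract's 10-city experiment (coordinates are made up).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, (10, 2))
D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
V = np.eye(10)                                         # one valid (identity-order) tour
print(tsp_energy(V, D))
```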
4.
Analysis of the Global Convergence of the PSO Algorithm   Cited 2 times in total (0 self-citations, 2 by others)   Free full-text PDF available
To address the central theoretical question of whether the PSO algorithm can find the global optimum, the global-convergence criteria for stochastic optimization algorithms are explained in detail and then applied to a theoretical analysis of the global convergence of PSO. The analysis shows that PSO does not satisfy the two conditions required by these criteria, and it is proved that PSO cannot guarantee global convergence.
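For readers who want the iteration being analyzed in front of them, here is a minimal sketch of the standard global-best PSO update; the paper's argument concerns whether an iteration of this form can satisfy the global-convergence criterion, which it shows it cannot. Parameter values are the common textbook defaults, not taken from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.729, c1=1.494, c2=1.494,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v                                               # position update
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Rastrigin function: many local minima, so PSO may stall away from the global optimum.
rastrigin = lambda p: 10 * len(p) + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
print(pso(rastrigin, dim=5))
```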
5.
A Multi-Step Network Delay Prediction Model Based on the Random Neural Network   Cited 2 times in total (0 self-citations, 2 by others)   Free full-text PDF available
The dynamic variation of network delay reflects the load characteristics of a network path, and accurate delay prediction is an important basis for congestion control and route selection. A delay prediction model based on the random neural network is established. It overcomes the drawbacks of traditional time-series prediction methods, which are strongly affected by random disturbances and require a cumbersome model-structure identification process, as well as those of traditional neural-network predictors, which tend to fall into local extrema and deviate from the global optimum. Simulation experiments show that the model is more accurate than the AR model and the RBF neural network predictor for both single-step and multi-step-ahead prediction.
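The RNN predictor is not specified in the abstract; the sketch below only illustrates the AR baseline it is compared against, fitted by least squares and applied recursively for multi-step-ahead forecasting. The model order, the synthetic "delay" series, and the horizon are illustrative assumptions.

```python
import numpy as np

def fit_ar(x, p=5):
    """Least-squares fit of an AR(p) model x_t = a_1 x_{t-1} + ... + a_p x_{t-p}."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_forecast(history, coeffs, steps):
    """Recursive multi-step forecast: feed each prediction back in as an input."""
    p = len(coeffs)
    buf = list(history[-p:])
    out = []
    for _ in range(steps):
        pred = float(np.dot(coeffs, buf[::-1]))   # most recent value first
        out.append(pred)
        buf = buf[1:] + [pred]
    return out

# Illustrative "delay" series: constant level + periodic load + noise (synthetic).
t = np.arange(500)
delay = 50 + 10 * np.sin(2 * np.pi * t / 60) + np.random.default_rng(0).normal(0, 1, 500)
coeffs = fit_ar(delay[:450], p=5)
print(ar_forecast(delay[:450], coeffs, steps=5))   # 5-step-ahead prediction
```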
6.
Gelenbe has modeled neural networks using an analogy with queuing theory. This model (called Random Neural Network) calculates the probability of activation of the neurons in the network. Recently, Fourneau and Gelenbe have proposed an extension of this model, called multiple classes random neural network model. The purpose of this paper is to describe the use of the multiple classes random neural network model to learn patterns having different colors. We propose a learning algorithm for the recognition of color patterns based upon non-linear equations of the multiple classes random neural network model using gradient descent of a quadratic error function. In addition, we propose a progressive retrieval process with adaptive threshold values. The experimental evaluation shows that the learning algorithm provides good results.
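The paper derives analytical gradients from the non-linear steady-state equations of the multiple classes model; the sketch below is only a generic stand-in that minimizes the same kind of quadratic error by gradient descent, using finite-difference gradients and a toy stand-in for the network. Every name and value here is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def train_quadratic(network, w0, patterns, lr=0.05, epochs=200, eps=1e-5):
    """Gradient descent on the quadratic error E = 1/2 * sum ||network(w, x) - y||^2.

    `network(w, x)` stands in for the steady-state output of a trained network;
    gradients are taken numerically here instead of via analytical equations.
    """
    w = np.array(w0, dtype=float)
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for k in range(len(w)):                    # central finite-difference gradient
            dw = np.zeros_like(w); dw[k] = eps
            e_plus = sum(0.5 * np.sum((network(w + dw, x) - y) ** 2) for x, y in patterns)
            e_minus = sum(0.5 * np.sum((network(w - dw, x) - y) ** 2) for x, y in patterns)
            grad[k] = (e_plus - e_minus) / (2 * eps)
        w = np.maximum(w - lr * grad, 0.0)         # keep weights non-negative (rates)
    return w

# Toy stand-in "network": a squashed linear map with a threshold (purely illustrative).
toy = lambda w, x: 1.0 / (1.0 + np.exp(-(w[:2] * x).sum() + w[2]))
patterns = [(np.array([0.2, 0.8]), 0.9), (np.array([0.7, 0.1]), 0.2)]
print(train_quadratic(toy, [0.5, 0.5, 0.5], patterns))
```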
7.
8.
王怡雯  丛爽 《计算机仿真》2004,21(11):161-165
Starting from an improved algorithm in which a dynamic random neural network (DRNN) solves TSP, a typical NP-hard optimization problem, a theoretical comparison is made between the DRNN and the continuous Hopfield network (CHNN). Although both networks express the optimal TSP route through an energy function and obtain a route solution by training a feedback network, their activation functions and convergence conditions differ, which allows the DRNN to tolerate small fluctuations of the energy function and thus escape local minima to reach the global optimum. In addition, compared with the CHNN, DRNN training is insensitive to parameter changes and the parameters are simple to set. Finally, simulation experiments on ten cities with random coordinates compare the route-finding ability of the two networks, further confirming the theoretical analysis and revealing the respective strengths and weaknesses of the RNN and the CHNN in solving TSP.
9.
In recent years, hydroforming has become the topic of much active research. Researchers have been looking for better procedures and prediction tools to improve the quality of the product and reduce the prototyping cost. Like any other metal forming process, hydroforming leads to non-homogeneous plastic deformations of the workpiece. In this paper, a model is developed to predict the amount of deformation caused by hydroforming using random neural networks (RNNs). RNNs learn the behavior of a system from the provided input/output data in a manner similar to the way the human brain does. This is different from the usual connectionist neural network (NN) models, which are based on simple functional analyses. Experimental data is collected and used in training as well as testing the RNNs. The RNN models have feedforward architectures and use a generalized learning algorithm in the training process. Multi-layer RNNs with as few as six neurons were used to capture the nonlinear correlations between the input and output data collected from an experimental setup. The RNN models were able to predict the center deflection, the thickness variation, as well as the deformed shape of circular plate specimens with good accuracy. Received: February 2004 / Accepted: September 2005
10.
D. Aldous, Algorithmica, 1998, 22(4): 388-412
Let S(v) be a function defined on the vertices v of the infinite binary tree. One algorithm to seek large positive values of S is the Metropolis-type Markov chain (X_n) defined by P(X_{n+1} = w | X_n = v) ∝ e^{bS(w)} for each neighbor w of v, where b is a parameter ("1/temperature") which the user can choose. We introduce and motivate study of this algorithm under a probability model for the objective function S, in which S is a "tree-indexed simple random walk": the increments ξ(e) = S(w) - S(v) along parent-child edges e = (v, w) are independent, with P(ξ = +1) = p and P(ξ = -1) = 1 - p. This algorithm has a "speed" r(p, b) = lim_n n^{-1} E S(X_n). We study the speed via a mixture of rigorous arguments, nonrigorous arguments, and Monte Carlo simulations, and compare with a deterministic greedy algorithm which permits rigorous analysis. Formalizing the nonrigorous arguments presents a challenging problem. Mathematically, the subject is in part analogous to recent work of Lyons et al. [19], [20] on the speed of random walk on Galton-Watson trees. A key feature of the model is the existence of a critical point p_crit below which the problem is infeasible; we study the behavior of algorithms as p ↓ p_crit. This provides a novel analogy between optimization algorithms and statistical physics. Received September 8, 1997; revised February 15, 1998.
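The exact form of the transition rule was lost in extraction; the sketch below assumes the reading given above, in which the chain moves from v to a neighbor w with probability proportional to e^{bS(w)}, and estimates the speed r(p, b) by the kind of Monte Carlo simulation the abstract mentions, generating the tree-indexed walk lazily. The parameter values in the example run are illustrative, not taken from the paper.

```python
import math, random

class Node:
    """A lazily generated vertex of the infinite binary tree."""
    __slots__ = ("S", "parent", "children")
    def __init__(self, S, parent):
        self.S, self.parent, self.children = S, parent, [None, None]

def simulate_speed(p, b, steps=20000, seed=0):
    """Monte Carlo estimate of the speed r(p, b) = lim n^{-1} E S(X_n).

    S is a tree-indexed simple random walk: each parent-child edge carries an
    independent increment, +1 with probability p and -1 otherwise.  The chain
    is assumed to move from v to a neighbor w with probability ~ exp(b*S(w)).
    """
    rng = random.Random(seed)
    v = Node(0, None)                          # root, S(root) = 0
    for _ in range(steps):
        for i in (0, 1):                       # sample the two child edges on demand
            if v.children[i] is None:
                inc = 1 if rng.random() < p else -1
                v.children[i] = Node(v.S + inc, v)
        nbrs = list(v.children) + ([v.parent] if v.parent else [])
        # weights ~ exp(b*S(w)); subtract S(v) so the exponentials stay bounded
        weights = [math.exp(b * (w.S - v.S)) for w in nbrs]
        v = rng.choices(nbrs, weights=weights)[0]
    return v.S / steps

# Illustrative run: supercritical p, moderate inverse temperature b.
print(simulate_speed(p=0.6, b=1.0))
```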