Similar Documents
 Found 20 similar documents (search time: 281 ms)
1.
This paper builds a quantitative structure–activity relationship (QSAR) model for the inhibition of aldose dehydrogenase by 44 flavonoid compounds, using a Bayesian-regularized back-propagation neural network. From 116 structure-related molecular descriptors, variables were screened with a genetic algorithm, and a Bayesian-regularized neural network model for activity prediction was built on the 8 selected variables; its predictive performance was examined on a validation set. Under this model, the squared simple correlation coefficients (R²) between experimental and predicted inhibitory activities were 0.94811 and 0.97789, respectively. The model shows that the inhibitory activity of flavonoids against aldose dehydrogenase is closely related to their structure, and that a Bayesian-regularized neural network combined with a genetic algorithm has good predictive ability.
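The Bayesian-regularization idea behind this abstract, minimising a weighted sum of data error and weight norm, F = β·E_D + α·E_W, can be sketched in closed form for a linear toy model. All data, dimensions, and the α/β values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a descriptor matrix: 40 compounds x 8 selected descriptors.
X = rng.normal(size=(40, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=40)

def bayes_reg_fit(X, y, alpha=1.0, beta=100.0):
    """Minimise F = beta*||y - Xw||^2 + alpha*||w||^2 (closed form).

    alpha penalises large weights, beta weights the data misfit; Bayesian
    regularization re-estimates both from the data, which is omitted here.
    """
    A = beta * X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, beta * X.T @ y)

w = bayes_reg_fit(X, y)
pred = X @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

A full reproduction would wrap a nonlinear network and iterate the α/β updates; the trade-off between fit and weight shrinkage is the same.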

2.
To address overfitting during BP neural network training, this paper proposes computing the L2 regularization parameter λ from generalizations of the first origin moment, the second origin moment, the variance, and the maximum-likelihood estimate. The method derives four candidate λ values by operating on the X matrix of the data set [X, Y]. BP networks are usually trained with Bayesian regularization, which depends on assumptions about the prior and data distributions; the proposed moment-based computation of λ for L2 regularization is simpler and imposes no such conditions. In a BP-network handwritten-digit recognition experiment, the method raised recognition accuracy by 1.14–1.50 percentage points over Bayesian regularization. The computed λ values applied to L2 regularization therefore give the BP network stronger generalization than Bayesian regularization, demonstrating the effectiveness of the approach.
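The abstract's idea of deriving candidate λ values from statistics of X can be sketched on a ridge-regression toy problem. The four estimators below are my guess at the intended statistics, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=0.5, scale=0.3, size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.2 * rng.normal(size=200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

# Four candidate lambdas computed from X alone (exact estimators in the
# paper may differ): 1st/2nd origin moments, variance, and a Gaussian
# MLE of the scale.
lams = [abs(X.mean()),                          # first origin moment
        (X ** 2).mean(),                        # second origin moment
        X.var(),                                # variance
        np.sqrt(((X - X.mean()) ** 2).mean())]  # MLE scale estimate

def ridge(Xt, yt, lam):
    """Closed-form L2-regularized least squares."""
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ yt)

errs = [np.mean((Xva @ ridge(Xtr, ytr, l) - yva) ** 2) for l in lams]
best = lams[int(np.argmin(errs))]
print(len(lams), round(best, 3))
```

The point of the method is that the candidates come from X itself, with no distributional assumptions; validation here merely shows they are usable values.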

3.
A KNN-Markov Correction Strategy for Lithium Battery SOH Estimation   (Total citations: 2; self: 0; other: 2)
State of health (SOH) is the key factor determining a lithium-ion battery's service life. Because manufacturing processes, operating environments, and usage habits differ, degradation characteristics vary widely and SOH is difficult to estimate accurately. This paper takes a data-driven approach: features are extracted from collected voltage data, a Bayesian-regularized neural network predicts SOH, and a KNN-Markov correction strategy refines the predictions. Experiments show that the Bayesian-regularized network predicts SOH accurately and that the KNN-Markov correction improves precision and robustness: the combined model's mean prediction error is below 1%, and its accuracy exceeds models based on the group method of data handling (GMDH), probabilistic neural networks (PNN), and recurrent neural networks (RNN) by 33.3%, 48.7%, and 53.1%, respectively.
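The Markov half of the correction strategy can be sketched as follows: discretise historical prediction residuals into states, fit a transition matrix, and add the expected next-state residual to the newest raw prediction. The SOH series, the biased base predictor, and the 4-state discretisation are all illustrative stand-ins for the paper's network and data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy SOH history: linear fade; the biased base predictor stands in for
# the Bayesian-regularised network, whose details the abstract omits.
true_soh = 1.0 - 0.002 * np.arange(200)
base_pred = true_soh - 0.01 + 0.005 * rng.normal(size=200)
resid = true_soh - base_pred            # errors the correction must learn

# Discretise residuals into 4 quartile states, fit a transition matrix.
edges = np.quantile(resid, [0.25, 0.5, 0.75])
states = np.digitize(resid, edges)      # state index 0..3 per step
T = np.zeros((4, 4))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)
centers = np.array([resid[states == s].mean() for s in range(4)])

# Correct a hypothetical next raw prediction with the expected residual
# of the next state, given the current state.
next_raw = 1.0 - 0.002 * 200 - 0.01     # hypothetical raw network output
correction = centers @ T[states[-1]]
print(round(next_raw + correction, 3))
```

The KNN part (selecting similar historical cells/cycles before building the chain) is left out for brevity.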

4.
Three-Step SVM Bayesian Combination on the Regularization Path   (Total citations: 1; self: 0; other: 1)
Model combination integrates multiple models from the hypothesis space to improve a learning system's stability and generalization. Whereas support vector machine (SVM) combination usually builds its candidate set by sampling, this work studies SVM combination based on the regularization path. It first proves the Lh-risk consistency of SVM combination, giving a sample-based justification, then proposes a three-step Bayesian combination method on the regularization path: the piecewise-linear property of the SVM regularization path is used to build the initial model set, and a pruning strategy based on mean generalized approximate cross-validation (GACV) yields the candidate set. At test or prediction time, a nearest-neighbor rule selects the input-sensitive final model set, and prediction is made by Bayesian combination. Unlike sampling-based methods, the three-step approach constructs its model set on the whole sample set along the regularization path, so training is easy to implement and computationally efficient; pruning shrinks the model set, improving both efficiency and predictive performance. Experiments verify the effectiveness of the method.

5.
Nonlinear Prediction Based on Neural Networks and Grey Models   (Total citations: 1; self: 3; other: 1)
Using field data from the hydroxylamine phosphate (HPO) preparation unit of a caprolactam plant, prediction models for the H+ concentration in hydroxylamine phosphate were built with a Bayesian-regularized neural network and with a grey model. The two approaches were compared and then combined into a single predictive model. Validation shows that the combined neural-network/grey model predicts quickly and effectively, laying a foundation for real-time prediction of quality indices and for acquiring expert-system knowledge.
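The grey-model half of such a hybrid is typically a GM(1,1): accumulate the series, fit the whitening equation by least squares, and de-accumulate the forecast. A minimal sketch on a synthetic near-exponential series (the real H+ data are not available here):

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `horizon` steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated (AGO) series
    z = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]   # inverse accumulation
    return x0_hat[-horizon:]

# Illustrative series standing in for concentration readings.
series = [2.0 * 1.05 ** i for i in range(8)]
pred = gm11_forecast(series, horizon=1)[0]
print(round(pred, 3))
```

In the combined scheme of the abstract, a neural network would then model the grey model's residuals; that stage is omitted here.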

6.
Model combination is an important way to improve SVM generalization, but it is computationally expensive. This paper proposes an SVM combination method based on Bayesian model averaging over the regularization path, which improves generalization while remaining computationally efficient. The initial model set is built with the regularization-path algorithm, and a probabilistic interpretation of the SVM is introduced: the prior over models is treated as a Gaussian process, the posterior probability of each model is obtained via Bayes' rule, and the models are combined by Bayesian model averaging. On standard data sets, the method is compared with cross-validation and generalized approximate cross-validation (GACV), confirming its effectiveness.
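The averaging step itself is simple once each candidate model has a (log-)posterior score: under a uniform prior, the model weights are a softmax of the log-evidence, and predictions are weight-averaged. The log-evidence values and per-model outputs below are hypothetical placeholders, not from the paper:

```python
import math

# Hypothetical per-model log-evidence scores, e.g. from an approximation
# computed along the regularization path.
log_ev = [-120.4, -118.9, -119.2, -125.0]

# Posterior model weights under a uniform prior: softmax of log-evidence,
# shifted by the max for numerical stability.
m = max(log_ev)
w = [math.exp(v - m) for v in log_ev]
s = sum(w)
w = [v / s for v in w]

# Bayesian-model-averaged prediction from hypothetical decision values.
preds = [0.8, 1.1, 0.9, 0.2]
bma = sum(wi * p for wi, p in zip(w, preds))
print([round(v, 3) for v in w], round(bma, 3))
```

Note how the model with the worst evidence (-125.0) contributes almost nothing: averaging softly prunes poor candidates.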

7.
李方方  赵英凯 《计算机工程与设计》2007,28(15):3647-3649,3658
Bayesian theory can exploit both sample information and prior knowledge to simplify prediction models and optimize parameters. This paper introduces the least-squares support vector machine (LS-SVM) in a Bayesian framework and the Bayesian-regularized neural network: the Bayesian LS-SVM can determine its regularization and kernel parameters, while the Bayesian-regularized network adaptively adjusts its complexity and its number of hidden nodes. Prediction models of both kinds were built for three key indices of light diesel oil (condensation point, flash point, and 95% distillation temperature) and their results compared. Simulation shows that the Bayesian LS-SVM generalizes better than the Bayesian-regularized network, runs faster, and computes more accurately.
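The core of "determining the regularization parameter in a Bayesian framework" is evidence maximization: pick the λ that maximizes the marginal likelihood of the data. A sketch for the linear/ridge special case, where the evidence has a closed form (the data, grid, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))
y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=60)

def log_evidence(X, y, lam, noise=0.3 ** 2):
    """Log marginal likelihood of Bayesian ridge regression:
    prior w ~ N(0, I/lam) implies y ~ N(0, X X^T / lam + noise * I)."""
    n = len(y)
    K = X @ X.T / lam + noise * np.eye(n)
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y) + n * np.log(2 * np.pi))

grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best = max(grid, key=lambda l: log_evidence(X, y, l))
print(best)
```

For the LS-SVM the same idea is applied at three inference levels (weights, hyperparameters, kernel parameters); only the hyperparameter level is shown here.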

8.
In recent years, convolutional neural networks (CNNs) have been applied widely and successfully across computer vision. Regularization is an essential component of CNNs: it prevents overfitting during model training. Yet surveys of CNN regularization are scarce and mostly omit recently proposed methods. This paper first reviews and organizes the relevant literature, grouping regularization methods into parameter, data, label, and combined regularization; it then compares the strengths and weaknesses of the different methods on public data sets such as ImageNet, using metrics such as top-1 and top-5 accuracy; finally, it discusses future research trends and directions for CNN regularization.
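Among the surveyed families, the canonical data/parameter-adjacent example is dropout. A minimal NumPy sketch of inverted dropout (framework-independent; real CNN training would use a library's built-in layer):

```python
import numpy as np

rng = np.random.default_rng(4)

def dropout(activations, p=0.5, train=True):
    """Inverted dropout: zero each unit with probability p and rescale the
    survivors by 1/(1-p) so the expected activation is unchanged; at test
    time the layer is the identity."""
    if not train:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones((1000, 64))
out = dropout(a, p=0.5)
# Mean activation stays near 1 and about half the units are zeroed.
print(round(out.mean(), 1), round((out == 0).mean(), 1))
```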

9.
Current deep-learning-based malware detectors often generalize poorly, owing to unreasonable model structures and sample preprocessing: a trained model detects malware outside its training set, or newly emerging malware, at a low rate. This paper proposes an improved detection method based on a deep neural network (DNN): the model is built from several fully connected layers, and a targeted-Dropout regularization method is introduced to prune network weights during training. Experiments on the Virusshare and lynx-project sample sets show that, compared with DeepMalNet, a detector likewise based on a DNN, the improved method raises the mean predicted probability on malicious PE samples by 0.048 and lowers it on packed benign PE samples by 0.64. The improved method thus generalizes better and detects malware outside the training set more effectively.
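The weight-pruning step can be illustrated with simple magnitude pruning: zero the smallest-magnitude fraction of a weight matrix. This is a crude stand-in for the paper's targeted-Dropout criterion, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def prune_smallest(W, frac=0.25):
    """Zero the `frac` smallest-magnitude entries of W (magnitude pruning;
    the paper's targeted-Dropout selection rule may differ)."""
    k = int(W.size * frac)
    thresh = np.sort(np.abs(W), axis=None)[k]
    return np.where(np.abs(W) < thresh, 0.0, W)

W = rng.normal(size=(8, 8))      # toy fully connected layer
Wp = prune_smallest(W, frac=0.25)
print(int((Wp == 0).sum()))      # number of pruned weights
```

In training, such pruning is applied periodically so the surviving weights are fine-tuned around the sparsified structure.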

10.
Agility Evaluation of Virtual Enterprises Based on Bayesian-Regularized Neural Networks   (Total citations: 1; self: 0; other: 1)
High agility is essential for a virtual enterprise to adapt to a constantly changing market, and evaluating it accurately is an important problem in virtual-enterprise operation. This paper first analyzes the relationship between the agility of a virtual enterprise and that of its member firms, then proposes computing the enterprise's agility with a Bayesian-regularized neural network given the members' agility, and finally tests the method's feasibility by simulation. The results show that, compared with a non-regularized network, the Bayesian-regularized network generalizes better and produces more stable evaluation results; the method is applicable to virtual enterprises of any size.

11.
Grey Neural Network Models and Their Applications   (Total citations: 6; self: 0; other: 6)
Grey modeling needs few sample points, does not require them to be well distributed, and is computationally light and easy to use. A BP network, for its part, corrects output errors by feedback during learning and offers parallel computation, distributed information storage, strong fault tolerance, and adaptive learning. This paper fuses grey prediction modeling with neural-network techniques into a grey neural network model (GNNM) and derives formulas for the residual sequence and the new predicted values. Applied to fermentation-kinetics prediction, the grey neural network outperforms the conventional grey model in accuracy. The algorithm is conceptually clear and computationally simple, fits and predicts with high precision, and broadens the range of application of grey models.

12.
周志华  姜远  陈世福 《计算机学报》2001,24(10):1064-1070
When a neural network suffers a multi-point open-circuit fault, several hidden neurons and their associated connection weights fail simultaneously. For a class of feed-forward networks whose hidden neurons can be added dynamically, this paper proposes a three-stage method, T3. T3 first trains the network once, then tests it on a validation set to locate the knee of the fault curve, and on that basis adaptively adds redundant hidden neurons in a second training pass. Experiments show that T3 markedly improves tolerance to multi-point open-circuit faults with only a small amount of redundancy, striking a good balance between fault tolerance and structural complexity.

13.
Suspicious mass traffic constantly evolves, making network behaviour tracing and structure analysis more complex. Neural networks yield promising results when a sufficient number of strongly interconnected processing elements is used; Hopfield neural network models and their optimization constraints exploit substantial parallelism to reach optimal results. Artificial neural networks (ANNs) offer effective solutions for classifying and clustering various kinds of data, and the results obtained depend largely on how the problem is formulated. This work presents the design of optimized applications in an organized manner and examines theoretical approaches to achieving optimized results with ANNs, focusing on design rules. The optimizing design approach analyzes the internal processes of the neural network, and practices in developing the network are based on the interconnections among the hidden nodes and their learning parameters. The methodology is shown to suit nonlinear resource-allocation problems and other complex issues given a suitable design. The ANN considered here comprises roughly 46k hidden nodes and 49 million connections, run on fully parallel processors. It produced optimal results on real-world application problems, obtained using MATLAB.

14.
A new structurally adaptive radial basis function (RBF) neural network model is proposed. In this model, a self-organizing map (SOM) network serves as the clustering network: it classifies the input samples with an unsupervised learning algorithm and passes the class centers and their associated weight vectors to the RBF network, where they become the radial-basis-function centers and the corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, and the output-layer weights are trained with a supervised learning algorithm, realizing the nonlinear mapping from input to output. Simulations on a letter data set show that the network performs well.
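The cluster-then-fit pipeline can be sketched end to end: an unsupervised clustering stage supplies the RBF centers, then the output weights are solved by least squares. A plain k-means stands in for the SOM here, and the two-blob data, the number of centers, and the kernel width are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 2-class data; k-means below is a stand-in for the SOM stage.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50, dtype=float)

def kmeans(X, k=4, iters=20):
    """Plain k-means; returns the cluster centers."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C

centers = kmeans(X)                      # cluster centers -> RBF centers
width = 0.5
Phi = np.exp(-((X[:, None] - centers) ** 2).sum(-1) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # supervised output layer
acc = ((Phi @ w > 0.5) == (y > 0.5)).mean()
print(acc)
```

A real SOM would additionally preserve topological neighborhood structure among the centers, which k-means does not.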

15.
Most artificial neural networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown the ability to overcome many of these problems. This paper introduces location-independent transformations (LITs) as a general strategy for implementing distributed feed-forward networks that use dynamic topologies (dynamic ANNs) efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents a LIT that supports both the standard (static) multilayer backpropagation network and backpropagation with dynamic extensions. The complexity of both the learning and execution algorithms is O(q(N log M)) for a single pattern, where q is the number of weight layers in the original network, N the number of nodes in the widest node layer in the original network, and M the number of nodes in the transformed network (which is linear in the number of hidden nodes in the original network). This paper extends previous work with 2-weight-layer backpropagation networks.

16.
The authors previously proposed a self-organizing Hierarchical Cerebellar Model Articulation Controller (HCMAC) neural network, containing a hierarchical GCMAC neural network and a self-organizing input-space module, to solve high-dimensional pattern classification problems. This novel neural network exhibits fast learning, a low memory requirement, automatic memory-parameter determination, and highly accurate high-dimensional pattern classification. However, the original architecture must be hierarchically expanded using a full binary-tree topology according to the dimension of the input vectors. This approach creates many redundant GCMAC nodes when the dimension of the input vectors in the classification problem does not exactly match that of the self-organizing HCMAC network; these redundant nodes waste memory units and degrade learning performance. This study therefore presents a minimal-structure self-organizing HCMAC (MHCMAC) neural network with the same input-vector dimension as the classification problem, and compares its learning performance with the BP neural network, the support vector machine (SVM), and the original self-organizing HCMAC network on ten benchmark pattern classification data sets from the UCI machine learning repository. The experimental results reveal that the self-organizing MHCMAC network handles high-dimensional pattern classification better than the BP, SVM, or original self-organizing HCMAC networks. Moreover, the proposed MHCMAC network significantly reduces the memory requirement of the original network, and on most benchmark data sets it trains faster and classifies more accurately. The results also show that the MHCMAC network learns continuous functions well and is suitable for Web page classification.

17.
P.A.  C.  M.  J.C.   《Neurocomputing》2009,72(13-15):2731
This paper proposes a hybrid neural network model using possible combinations of different transfer projection functions (sigmoidal unit, SU; product unit, PU) and kernel functions (radial basis function, RBF) in the hidden layer of a feed-forward neural network. An evolutionary algorithm is adapted to this model and applied to learn the architecture, weights, and node typology. Three combined basis-function models are proposed, covering all pairs that can be formed from SU, PU, and RBF nodes: product–sigmoidal unit (PSU), product–radial basis function (PRBF), and sigmoidal–radial basis function (SRBF) neural networks; these are compared with the corresponding pure models: the product unit neural network (PUNN), the multilayer perceptron (MLP), and the RBF neural network. The proposals are tested on ten benchmark classification problems from well-known machine learning repositories. Combined projection and kernel functions are found to classify better than pure basis functions on several data sets.

18.
Deep neural networks are now widely used in fields such as autonomous driving and intelligent healthcare. Like traditional software, they inevitably contain defects, and a wrong decision may have serious consequences, so their quality assurance has drawn wide attention. However, deep neural networks differ substantially from traditional software: traditional quality-assurance methods cannot be applied directly, and targeted methods must be designed. Fault localization is one important quality-assurance method; spectrum-based fault localization works well for traditional software but does not apply directly to deep neural networks. Building on traditional fault-localization methods, this paper proposes Deep-SBFL, a spectrum-based fault-localization method for deep neural networks. The method first collects neuron outputs and prediction results as spectrum information, then processes the spectrum into contribution information that quantifies each neuron's contribution to the prediction, and finally proposes a suspiciousness formula for DNN fault localization, which scores and ranks the network's neurons to find those most likely to be defective. To validate the method's effectiveness, EInspect@n (the number of defects successfully located within the first n positions of the ranked list) and EXAM (the percentage of elements that must be examined before a defective element is found) are used as evaluation metri…
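The spectrum-to-suspiciousness step can be illustrated with a classic formula from software fault localization. The paper defines its own DNN-specific suspiciousness measure, which is not reproduced here; the sketch below uses the standard Ochiai formula on a toy per-neuron activation spectrum:

```python
import math

# Toy spectrum: for each neuron, (a_ef, a_ep) = how often it was strongly
# activated in failing vs. passing predictions (hypothetical counts).
spectrum = {"n1": (9, 1), "n2": (5, 5), "n3": (1, 9)}
total_fail = 10   # total failing predictions observed

def ochiai(a_ef, a_ep):
    """Classic Ochiai suspiciousness: a_ef / sqrt(total_fail * (a_ef + a_ep))."""
    denom = math.sqrt(total_fail * (a_ef + a_ep))
    return a_ef / denom if denom else 0.0

# Rank neurons from most to least suspicious.
ranked = sorted(spectrum, key=lambda n: ochiai(*spectrum[n]), reverse=True)
print(ranked)   # → ['n1', 'n2', 'n3']
```

Neuron n1, active almost exclusively in failing runs, ranks first; this ranking is exactly what EInspect@n and EXAM then evaluate.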

19.
A new structurally adaptive radial basis function (RBF) neural network model is proposed. In this network, a self-organizing map (SOM) network serves as the clustering network: it classifies the input samples with an unsupervised learning algorithm and passes the class centers and their associated weight vectors to the RBF network as the radial-basis-function centers and corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, and the output-layer weights are trained with a supervised learning algorithm, realizing the nonlinear mapping from input to output. Simulations on a letter data set show that the network performs well.

20.
A supervised learning algorithm for quantum neural networks (QNN), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to the QNNs published in the literature, the proposed model can both perform quantum learning and simulate the classical models; this is partly because the neural model used elsewhere has weights and non-linear activation functions. Here a quantum weightless neural network model is proposed as a quantisation of classical weightless neural networks (WNN), so that theoretical and practical results on WNNs can be inherited by these quantum weightless neural networks (qWNN). In the quantum learning algorithm proposed here, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has computational cost polynomial in the number of patterns in the training set.
