Similar Documents
20 similar documents found.
1.
Evaluating the Predictive Capability of Different Kernel Functions in Software Reliability Prediction (Cited: 2; self: 0; others: 2)
Software reliability prediction modeling based on kernel regression estimation has attracted considerable research interest. In this line of work the choice of kernel function is particularly important, yet little work has addressed selecting or constructing a kernel for a given set of software failure data. This study applies paired t-tests on 14 commonly used software failure data sets to investigate kernel selection for kernel-based software reliability prediction models. The kernel regression methods considered are kernel principal component regression, kernel partial least squares regression, support vector regression, and relevance vector regression; the kernels include the Gaussian, linear, polynomial, Cauchy, Laplacian, symmetric triangular, hyperbolic secant, and squared sine kernels. The experimental results show that different types of kernels behave very differently across data sets, while the Gaussian kernel is the most stable on all data sets and gives the best predictions.
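Below is a minimal sketch of this kind of kernel comparison, assuming scikit-learn's SVR as the kernel regressor and synthetic failure times in place of the 14 real data sets; the walk-forward protocol and the paired t-test over per-step errors are illustrative, not the paper's exact setup.

```python
# Illustrative sketch (not the paper's protocol): comparing SVR kernels on a
# software failure-time series with a paired t-test over one-step errors.
import numpy as np
from scipy import stats
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

# Hypothetical failure data: cumulative failure times indexed by failure count.
failure_times = np.cumsum(np.random.exponential(scale=10.0, size=100))
X = np.arange(1, len(failure_times) + 1, dtype=float).reshape(-1, 1)
y = failure_times

def one_step_errors(kernel):
    """Walk-forward prediction: train on the first i points, predict point i+1."""
    errors = []
    for i in range(30, len(y) - 1):
        scaler = StandardScaler().fit(X[:i])
        model = SVR(kernel=kernel, C=100.0).fit(scaler.transform(X[:i]), y[:i])
        pred = model.predict(scaler.transform(X[i:i + 1]))[0]
        errors.append(abs(pred - y[i]))
    return np.array(errors)

err_rbf, err_lin = one_step_errors("rbf"), one_step_errors("linear")
t_stat, p_value = stats.ttest_rel(err_rbf, err_lin)  # paired t-test on the errors
print(f"RBF vs linear: t={t_stat:.3f}, p={p_value:.3f}")
```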

2.
A Chaotic Neural Network Model for Software Reliability (Cited: 2; self: 2; others: 0)
张柯  张德平  汪帅 《计算机科学》2014,41(4):172-177
A chaotic neural network model for software reliability modeling and prediction is proposed based on empirical mode decomposition (EMD), chaos analysis, and neural network theory. First, EMD decomposes the software failure data series into intrinsic mode components at different scales, and further analysis on this basis indicates whether the failure data exhibit chaotic behavior. A neural network then performs combined prediction, improving the model's ability to learn the target function and effectively raising prediction accuracy. Finally, on two real software failure data sets, the proposed method is compared with reliability prediction models based on support vector regression and on a plain neural network. The results show that the model combining chaos analysis, EMD, and a neural network fits the data better and predicts more accurately.
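A minimal sketch of the decompose-then-predict idea follows, assuming the third-party PyEMD package (pip install EMD-signal) for empirical mode decomposition and scikit-learn's MLPRegressor as the neural network; the data, lag order, and network settings are placeholders, and the chaos analysis step is omitted.

```python
import numpy as np
from PyEMD import EMD                      # assumed third-party dependency
from sklearn.neural_network import MLPRegressor

failure_series = np.cumsum(np.random.exponential(10.0, size=120))  # placeholder data

imfs = EMD()(failure_series)               # intrinsic mode functions, one row per IMF

def lagged(series, p=3):
    """Build (X, y) pairs from p lagged values of a 1-D series."""
    X = np.column_stack([series[i: len(series) - p + i] for i in range(p)])
    return X, series[p:]

# Predict each IMF separately, then sum the component predictions.
reconstruction = np.zeros(len(failure_series) - 3)
for imf in imfs:
    X, y = lagged(imf)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    reconstruction += net.predict(X)
```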

3.
A Software Reliability Model Considering Deviations of Individual Failure Processes (Cited: 3; self: 0; others: 3)
Software reliability analysis estimates and evaluates software reliability by building suitable models from software failure data and related information. Existing stochastic-process-based reliability models generally use a mean-value process to describe the failure data; however, modeling failure data should in essence treat them as one sample path of some stochastic process. This paper builds a software reliability model that accounts for the deviation of an individual failure process: an NHPP describes the trend of the mean-value function of the failure process, while an ARMA process describes the sequence of deviations of the actual failure process from the mean process. Experiments on two publicly available real data sets show that the new model clearly improves on several widely used NHPP software reliability models in goodness of fit and applicability, while retaining good predictive ability.
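For concreteness, the decomposition can be written as below, using the Goel-Okumoto model as one example NHPP mean-value function (the paper itself is not tied to this particular form):

```latex
\begin{aligned}
  m(t) &= a\,\bigl(1 - e^{-bt}\bigr), \quad a > 0,\ b > 0
        && \text{(example NHPP mean-value function)} \\
  d_i  &= N(t_i) - m(t_i)
        && \text{(deviation of the observed failure process)} \\
  d_i  &= \sum_{j=1}^{p} \phi_j\, d_{i-j} + \varepsilon_i
          + \sum_{k=1}^{q} \theta_k\, \varepsilon_{i-k}
        && \text{(ARMA($p,q$) model of the deviation sequence)}
\end{aligned}
```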

4.
梁宏涛  徐建良  许可 《计算机科学》2016,43(11):257-259
Reliability, as an important attribute of software quality, matters greatly for software management. To overcome the limitations of a single kernel, this paper proposes a software reliability prediction model based on a relevance vector machine with a combined kernel. It first reviews the current state of software reliability research, then trains a combined-kernel relevance vector machine on the training set, and finally analyzes the model's predictive performance on a concrete example. The results show that the model achieves good software reliability predictions, outperforms single-kernel models, and is of practical value for software reliability prediction.
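The sketch below shows one common way to form a combined kernel as a convex combination of an RBF and a polynomial kernel; since scikit-learn has no relevance vector machine, SVR with a precomputed Gram matrix stands in purely to illustrate how such a kernel plugs into a kernel regressor.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVR

def combined_kernel(A, B, weight=0.6, gamma=0.5, degree=2):
    """K = w * K_rbf + (1 - w) * K_poly, a convex combination of two valid kernels."""
    return weight * rbf_kernel(A, B, gamma=gamma) + \
           (1.0 - weight) * polynomial_kernel(A, B, degree=degree)

X_train = np.arange(1, 51, dtype=float).reshape(-1, 1)          # failure index
y_train = np.cumsum(np.random.exponential(8.0, size=50))        # placeholder failure times

model = SVR(kernel="precomputed").fit(combined_kernel(X_train, X_train), y_train)
X_test = np.array([[51.0], [52.0]])
print(model.predict(combined_kernel(X_test, X_train)))
```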

5.
A Software Reliability Prediction Model Based on EMD and GEP (Cited: 1; self: 1; others: 0)
A software reliability prediction model is proposed based on empirical mode decomposition (EMD) and gene expression programming (GEP). EMD decomposes the software failure data series into intrinsic mode functions of different frequency bands plus a residual component, removing noise from the failure data. Exploiting the flexible expressive power of GEP, the failure-time subseries corresponding to each intrinsic mode function and to the residual are predicted separately, and the component predictions are then recombined to give the final software failure prediction. On two real software failure data sets, the method is compared with prediction models based on support vector regression and on GEP alone. The results show that the proposed model fits the data better and predicts more accurately.

6.
Software reliability prediction builds on reliability prediction models to analyze, evaluate, and forecast software reliability and its directly related metrics, using failure data collected during operation to predict future reliability; it has become an important means of assessing failure behavior and assuring software reliability. BP neural networks are simple in structure, have few parameters, and are easy to implement, and have therefore been widely used in this field. However, prediction models built on a conventional BP network often fail to reach the expected accuracy, so a software reliability prediction model based on BASFPA-BP is proposed. Using the software failure data, the BASFPA algorithm optimizes the network weights and thresholds during BP training, improving prediction accuracy. Three public software failure data sets are used, the mean squared error between actual and predicted values serves as the evaluation criterion, and BASFPA-BP is compared against FPA-BP, BP, and Elman models. The experimental results show that the BASFPA-BP model achieves higher prediction accuracy than models of the same type.
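The evaluation criterion mentioned above is the usual mean squared error over the n test points:

```latex
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \bigl(y_i - \hat{y}_i\bigr)^2
```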

7.
Research Progress on Software Reliability Models (Cited: 6; self: 1; others: 5)
Software reliability models aim to estimate or predict software reliability by building models from software failure data. They are not only the most powerful tools for reliability prediction, allocation, analysis, and evaluation, but also provide guidance for improving software quality. This paper classifies and examines the many software reliability models proposed in recent years, discusses the predictive ability and applicability of some of them, analyzes why many models generalize poorly, and outlines future research directions.

8.
By analyzing the characteristics of input-domain and time-domain software reliability models, a nonparametric input-domain reliability assessment model is established, which overcomes the poor assessment accuracy and lack of predictive ability of typical input-domain models. A nonparametric statistical method is proposed to estimate the number of defects and the software failure probability, providing a way to assess software reliability from data obtained in ordinary software testing. A case study shows that the model assesses software reliability well, gives reasonable estimates of the defect count and reliability, and is no less accurate than good time-domain models.
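As a point of reference only (the paper's nonparametric estimator is more elaborate), the simplest input-domain style estimate from n independent test runs drawn from the operational profile, with f observed failures, is:

```latex
\hat{p} = \frac{f}{n}, \qquad
\hat{R}(k) = (1 - \hat{p})^{k}
\quad \text{(estimated probability that the next } k \text{ runs are failure-free)}
```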

9.
A recursive least squares support vector machine (RLSSVM) is used to build a software reliability failure model; repeated training on the failure data set improves the model's learning ability. Thanks to the recursive computation, the model can track changes in software reliability dynamically and predict failures accurately. Simulated annealing (SA) is used to tune the RLSSVM parameters, yielding an improved RLSSVM with an optimized model structure. Compared with commonly used non-homogeneous Poisson process models, the reliability model built from the RLSSVM and SA has better fitting and prediction ability.
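The sketch below solves the standard batch LS-SVM regression system with an RBF kernel in NumPy; the recursive (online) update and the simulated-annealing parameter search described above are omitted, and the data are placeholders.

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma_reg=10.0, gamma_rbf=0.1):
    """Solve the LS-SVM system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf(X, X, gamma_rbf)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma_reg
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                    # bias b, dual coefficients alpha

def lssvm_predict(X_new, X, alpha, b, gamma_rbf=0.1):
    return rbf(X_new, X, gamma_rbf) @ alpha + b

X = np.arange(1, 41, dtype=float).reshape(-1, 1)
y = np.cumsum(np.random.exponential(5.0, size=40))   # placeholder failure times
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(np.array([[41.0]]), X, alpha, b))
```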

10.
张晓风  张德平 《计算机科学》2016,43(Z11):486-489, 494
Software defect prediction is an important direction in software reliability research. Many factors affect software failures, and their interdependencies are complex; in modeling they are usually described by a joint distribution function, which is hard to determine in practice and directly affects failure prediction. This paper proposes a failure prediction method based on quasi-likelihood estimation: principal component analysis screens the main factors affecting software failures, a multi-factor failure prediction model is built, its parameters are estimated by quasi-likelihood from the numerical characteristics (mean and variance functions) of these factors, and software failures are then predicted. Experiments on two real data sets, Eclipse JDT and Eclipse PDE, compare the method with classical logistic and probit regression models. The results show that quasi-likelihood estimation is feasible for defect prediction and that its accuracy exceeds both classical regression models.
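The sketch below reproduces only the baseline side of the comparison (PCA screening followed by logistic regression) with scikit-learn on placeholder data; the quasi-likelihood estimator itself is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                 # placeholder static-code metrics
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)  # defect label

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```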

11.
Smooth relevance vector machine: a smoothness prior extension of the RVM (Cited: 2; self: 0; others: 2)
Enforcing sparsity constraints has been shown to be an effective and efficient way to obtain state-of-the-art results in regression and classification tasks. Unlike the support vector machine (SVM), the relevance vector machine (RVM) explicitly encodes the criterion of model sparsity as a prior over the model weights. However, the lack of an explicit prior structure over the weight variances means that the degree of sparsity is to a large extent controlled by the choice of kernel (and kernel parameters). This can lead to severe overfitting or oversmoothing, possibly even both at the same time (e.g. for the multiscale Doppler data). We detail an efficient scheme to control sparsity in Bayesian regression by incorporating a flexible noise-dependent smoothness prior into the RVM. We present an empirical evaluation of the effects of choice of prior structure on a selection of popular data sets and elucidate the link between Bayesian wavelet shrinkage and RVM regression. Our model encompasses the original RVM as a special case, but our empirical results show that we can surpass RVM performance in terms of goodness of fit and achieved sparsity as well as computational performance in many cases. The code is freely available.
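For reference, the sparsity mechanism the abstract builds on is Tipping's automatic relevance determination prior, one independent precision hyperparameter per weight; maximizing the marginal likelihood drives many precisions to infinity and prunes the corresponding basis functions:

```latex
p(\mathbf{w} \mid \boldsymbol{\alpha})
  = \prod_{i} \mathcal{N}\!\bigl(w_i \mid 0,\ \alpha_i^{-1}\bigr)
```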

12.
Kernel methods provide high performance in a variety of machine learning tasks. However, the success of kernel methods is heavily dependent on the selection of the right kernel function and proper setting of its parameters. Several sets of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good error-rate performance, these kernel functions have only one parameter, chosen from a small set of integers, which greatly facilitates kernel selection. Two sets of orthogonal polynomial kernel functions, namely the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of some orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets are performed to illustrate and compare these kernel functions in classification and regression scenarios. In general, these orthogonal polynomial kernels differ in accuracy, and most of them can match commonly used kernels such as the polynomial kernel, the Gaussian kernel, and the wavelet kernel. Compared with these universal kernels, the orthogonal polynomial kernels each have a unique, easily optimized parameter, and they store statistically significantly fewer support vectors in support vector classification. The newly presented kernels obtain better generalization performance for both classification and regression tasks.
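A generic unweighted Chebyshev-sum kernel for scalar inputs on [-1, 1] is sketched below to illustrate the single integer parameter such kernels expose; the triangularly modified Chebyshev and Legendre kernels proposed in the paper are not reproduced.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_kernel(x, z, n=4):
    """k(x, z) = sum_{i=0..n} T_i(x) * T_i(z); a finite feature map, hence a valid kernel."""
    total = 0.0
    for i in range(n + 1):
        coeffs = np.zeros(i + 1)
        coeffs[i] = 1.0                      # select the i-th Chebyshev polynomial T_i
        total += C.chebval(x, coeffs) * C.chebval(z, coeffs)
    return total

print(chebyshev_kernel(0.2, -0.5, n=4))
```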

13.
Anomaly detection in images is an important topic in computer vision and can be formulated as a one-class classification problem. Given the large scale and high dimensionality of image data sets, a new anomaly detection model, CAE-OCSVM, is proposed, combining a deep convolutional autoencoder (CAE) with a kernel-approximated one-class support vector machine (OCSVM). The convolutional autoencoder learns an essential feature representation of the images; random Fourier features then approximate the kernel on these learned features, which are fed to a linear one-class SVM for anomaly detection. The kernel approximation avoids the high time complexity of kernel learning, and the autoencoder and the kernel-approximated one-class SVM are trained end to end by gradient descent. The model's AUC is evaluated on four public image benchmark data sets, and its performance is compared with other common anomaly detection models at different anomaly rates. The results confirm that CAE-OCSVM outperforms the other models on all four data sets, indicating that it is better suited to anomaly detection on large, high-dimensional data sets.
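The sketch below shows only the kernel-approximation stage, using scikit-learn's RBFSampler (random Fourier features) in front of a linear one-class SVM; random vectors stand in for the convolutional autoencoder's embeddings, and the end-to-end training is omitted.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))            # placeholder CAE embeddings

rff = RBFSampler(gamma=0.05, n_components=256, random_state=0)
z = rff.fit_transform(features)                   # explicit approximate RBF features

detector = OneClassSVM(kernel="linear", nu=0.05).fit(z)
scores = detector.decision_function(z)            # lower scores = more anomalous
print((scores < 0).mean())                        # fraction flagged as anomalies
```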

14.
The choice of kernel function and its parameters determines the performance of kernel methods. Based on semi-supervised learning, this paper constructs an objective function that uses unlabeled data and pairwise constraints to optimize the kernel so that it fits the data set as well as possible, thereby improving kernel performance. To verify the method, it is applied to kernel optimization in kernel principal component analysis (KPCA), and the classification and clustering performance of KPCA-extracted features is evaluated on artificial data and UCI data sets. The experimental results show that the proposed method improves classification and clustering performance.

15.
Software defect prediction is an important direction in software reliability research. A defect prediction model is proposed based on group method of data handling (GMDH) networks and causality testing. Borrowing the idea of the Granger test, a GMDH network selects the metrics that are causally related to software failures, and a defect prediction model is then built on them. The method studies the causal relations between software metrics and defects from the perspective of complex-system modeling and can test causality among multiple variables in a nonlinear sense. On two real software failure data sets, the method is compared with a defect prediction model based on the Granger causality test. The results show that the GMDH-based causality model predicts significantly better than the Granger approach.
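The linear Granger baseline mentioned above can be run with statsmodels as sketched below; the GMDH-based nonlinear causality selection itself is not reproduced, and both series are placeholders.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
metric = rng.normal(size=200)                                  # e.g. weekly code churn
defects = np.roll(metric, 2) + 0.5 * rng.normal(size=200)      # defects lag the metric

# Column order matters: the test asks whether column 1 Granger-causes column 0.
data = np.column_stack([defects, metric])
results = grangercausalitytests(data, maxlag=3, verbose=False)
print({lag: round(res[0]["ssr_ftest"][1], 4) for lag, res in results.items()})  # p-values
```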

16.
Software defect prediction is an effective way to improve software quality, and its effectiveness is closely tied to the characteristics of the data set. To address redundant features and excessive dimensionality in defect prediction data, and exploiting deep learning's ability to learn data features, a defect prediction method based on a deep autoencoder network is proposed. The method first applies an unsupervised-learning-based sampling technique to six open-source project data sets to address class imbalance, then trains a deep autoencoder that reduces feature dimensionality; three classifiers are attached to the model, trained on the reduced training set, and the test set is used for prediction. The experiments show that on high-dimensional data sets with redundant features, the method outperforms baseline defect prediction models and models based on existing feature-extraction methods, and works with different classification algorithms.
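A minimal Keras sketch of the autoencoder-as-feature-reduction step follows; layer sizes and data are placeholders, the unsupervised sampling step is omitted, and a single logistic regression stands in for the three classifiers.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40)).astype("float32")   # placeholder metric vectors
y = (rng.random(1000) < 0.2).astype(int)            # placeholder defect labels

inputs = layers.Input(shape=(40,))
encoded = layers.Dense(16, activation="relu")(inputs)
bottleneck = layers.Dense(8, activation="relu")(encoded)
decoded = layers.Dense(16, activation="relu")(bottleneck)
outputs = layers.Dense(40, activation="linear")(decoded)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=64, verbose=0)   # reconstruct the input

encoder = models.Model(inputs, bottleneck)                   # keep the reduced features
clf = LogisticRegression(max_iter=1000).fit(encoder.predict(X, verbose=0), y)
```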

17.
Dimensionality reduction is an important preprocessing procedure in computer vision, pattern recognition, information retrieval, and data mining. In this paper we present a kernel method based on approximately harmonic projection (AHP), a recently proposed linear manifold learning method with excellent clustering performance. The kernel matrix implicitly maps the data, which lie on a nonlinear manifold, into a reproducing kernel Hilbert space (RKHS) and makes their structure more distinct. The method retains and extends the advantages of its linear version and remains sensitive to the connected components, which makes it particularly suitable for unsupervised clustering. Besides, it can cover various classes of nonlinearities through different kernels. We evaluate the new method on several well-known data sets to demonstrate its effectiveness. The results show that the new algorithm performs well and outperforms other classic algorithms on those data sets.

18.
Unsupervised feature extraction via kernel subspace techniques (Cited: 1; self: 0; others: 1)
This paper provides a new insight into unsupervised feature extraction techniques based on kernel subspace models. The data projected onto kernel subspace models are new data representations which might be better suited for classification. The kernel subspace models are always described exploiting the dual form for the basis vectors, which requires that the training data be available even during the test phase. By exploiting an incomplete Cholesky decomposition of the kernel matrix, a computationally less demanding implementation is proposed. Online benchmark data sets allow the evaluation of these feature extraction methods, comparing the performance of two classifiers which both take as input either the raw data or the new representations.

19.
Two techniques that analyze prediction accuracy and enhance the predictive power of a software reliability model are presented. The u-plot technique detects systematic differences between predicted and observed failure behavior, allowing the recalibration of a software reliability model to obtain more accurate predictions. The prequential likelihood ratio (PLR) technique compares two models' abilities to predict a particular data source so that the one that has been most accurate over a sequence of predictions can be selected. The application of these techniques is illustrated using three sets of real failure data.
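In the usual notation, with F-hat and f-hat the predictive distribution and density for the i-th inter-failure time and t_i the value actually observed, the two quantities referred to above are:

```latex
\begin{aligned}
  u_i &= \hat F_i(t_i)
      && \text{(u-plot: the } u_i \text{ should look like i.i.d. } U(0,1)\text{)} \\
  \mathrm{PLR}_n &= \prod_{i=1}^{n} \frac{\hat f_i^{A}(t_i)}{\hat f_i^{B}(t_i)}
      && \text{(prequential likelihood ratio of model } A \text{ over model } B\text{)}
\end{aligned}
```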

20.
The demand for the development of good-quality software has grown rapidly in the last few years, leading to increased use of machine learning methods for analyzing and assessing public-domain data sets. These methods can be used to develop models for estimating software quality attributes such as fault proneness, maintenance effort, and testing effort. Software fault prediction in the early phases of development can help software practitioners focus the available testing resources on the weaker areas. This paper analyses and compares a statistical method and six machine learning methods for fault prediction. These methods (Decision Tree, Artificial Neural Network, Cascade Correlation Network, Support Vector Machine, Group Method of Data Handling, and Gene Expression Programming) are empirically validated to find the relationship between static code metrics and the fault proneness of a module. To assess and compare the models built with regression and with the machine learning methods, we used two publicly available data sets, AR1 and AR6, and compared the predictive capability of the models using the Area Under the Curve (measured from Receiver Operating Characteristic (ROC) analysis). The study confirms the predictive capability of the machine learning methods for software fault prediction. The results show that the Area Under the Curve of the model built with the Decision Tree method is 0.8 and 0.9 (for the AR1 and AR6 data sets, respectively), better than the models built with logistic regression and the other machine learning methods.
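A minimal sketch of this kind of evaluation with scikit-learn follows: a decision tree scored by ROC AUC under cross-validation, with random placeholders in place of the AR1/AR6 metric data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 29))                    # placeholder static code metrics
y = (rng.random(120) < 0.15).astype(int)          # placeholder fault-proneness labels

probs = cross_val_predict(DecisionTreeClassifier(max_depth=3, random_state=0),
                          X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 3))
```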
