Similar Documents
20 similar documents found (search time: 377 ms)
1.
张晓风  张德平 《计算机科学》2016,43(Z11):486-489, 494
Software defect prediction is an important direction in software reliability research. Many factors influence software failures, and the relationships among them are complex; in analytical modeling they are usually described by a joint distribution function, which is difficult to determine in practice and directly affects failure prediction. This paper proposes a software failure prediction method based on quasi-likelihood estimation. Principal component analysis is used to screen out the main factors influencing software failures, a multi-factor failure prediction model is built, and the model parameters are estimated from the numerical characteristics of these factors (mean and variance functions) via quasi-likelihood estimation, after which software failures are predicted. Experiments on two real data sets, Eclipse JDT and Eclipse PDE, comparing against classical logistic and probit regression models, show that quasi-likelihood estimation is feasible for software defect prediction and achieves higher prediction accuracy than both classical regression models.
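The logistic-regression baseline used in the comparison above can be sketched in a few lines. This is a minimal pure-Python version trained by plain gradient descent on made-up module metrics; the data and the optimizer choice are illustrative assumptions, not the paper's quasi-likelihood model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic-regression defect predictor by batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Toy metrics per module: (normalized size, normalized complexity); label = defective?
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
w, b = fit_logistic(X, y)
pred = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5 else 0
        for xi in X]
```

A probit baseline would differ only in replacing the sigmoid link with the standard normal CDF.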

2.
BP neural networks are among the most popular tools for analyzing stock data. Recent research on pattern-matching algorithms shows that pattern matching simplifies stock trend prediction and offers a simple, effective approach to stock market forecasting. This paper describes the principles of BP neural networks and pattern-matching recognition, combines the two algorithms, and builds a stock market analysis and prediction system based on both. The system overcomes the local minima of the neural network predictor's objective function and the pattern-matching predictor's neglect of the stock price's own dynamics, retaining the strengths of both algorithms for stock prediction. The system was tested by analyzing the share price of Taishan Petroleum. Experimental results show that the method converges quickly, predicts accurately, is easy to operate, and has practical value.

3.
To address the nonlinear, non-stationary nature of gas load data, this paper proposes a hybrid forecasting model combining an improved LMD algorithm with a GRU neural network. The model first decomposes the gas load series with the improved LMD, which replaces the traditional moving-average step with piecewise Newton interpolation to obtain the local mean and envelope estimate functions, alleviating the over-smoothing of standard LMD. The resulting PF components are then denoised by wavelet thresholding to obtain clean component data. Finally, a GRU network predicts each component separately, and the component forecasts are summed to give the final load forecast. Simulations show higher prediction accuracy than either a single GRU network or a GRU combined with standard LMD.

4.
This paper presents two network-traffic intrusion detection methods and compares their results: linear autoregressive (AR) prediction and nonlinear support vector machine (SVM) prediction. A detailed analysis of their effectiveness in predicting network attacks is given. Experiments show that the SVM model does improve attack recognition and has a much lower false-alarm rate than the AR model; on the other hand, the AR prediction model has lower computational complexity than the SVM.
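The AR side of such a comparison can be illustrated with a minimal sketch: an AR(1) model fitted by least squares, flagging the points whose one-step prediction error exceeds a threshold. The traffic series and threshold below are made-up values, not the paper's data:

```python
def fit_ar1(series):
    """Least-squares estimate of the AR(1) coefficient phi in x_t ≈ phi * x_{t-1}."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

def detect_anomalies(series, phi, threshold):
    """Flag time steps whose one-step AR(1) prediction error exceeds the threshold."""
    return [t for t in range(1, len(series))
            if abs(series[t] - phi * series[t - 1]) > threshold]

# normalized traffic volume with an injected spike at t = 5
traffic = [1.0, 0.9, 0.95, 1.0, 1.05, 5.0, 1.0, 0.95]
phi = fit_ar1(traffic)
alerts = detect_anomalies(traffic, phi, threshold=1.5)
```

An SVM-based detector would instead learn a nonlinear decision boundary over windows of past samples, at a higher training cost.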

5.
For fuzzy systems whose rule consequents are either numeric values or linear functions, this paper identifies their common ground and places both within a single network structure, yielding two fuzzy neural networks (FNNs) with numeric and linear-function (T-S model) consequents; their network models and learning algorithms are derived. They are applied, for the first time, to strength prediction and mix design of high-strength concrete. A simple, effective method is also given for extracting fuzzy rules from sample data and initializing the FNN parameters. Results show that the FNNs not only achieve high prediction accuracy, but also that their nodes and weights carry clear physical meaning, allowing deeper analysis of the nonlinear relationships between the overall properties of high-strength concrete and their influencing factors.

6.
This paper proposes a network-structure sparsification method for the broad learning system (BLS) based on the lasso and the elastic net: the L2 norm in the standard BLS objective is replaced by lasso and elastic-net penalties, which constrain the network output weights, measure how much each node's output weight contributes to the prediction, and prune redundant nodes, making the network structure sparser. Experiments on several regression data sets show that the proposed method simplifies the network structure without loss of prediction accuracy.
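The lasso half of this idea, an L1 penalty on output weights that drives redundant nodes exactly to zero, can be sketched with iterative soft-thresholding (ISTA). The tiny orthogonal design below stands in for node outputs and is an illustrative assumption, not the BLS feature layer:

```python
def ista_lasso(X, y, lam, lr=0.01, iters=5000):
    """Minimize ||Xw - y||^2 + lam * ||w||_1 by iterative soft-thresholding (ISTA)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # gradient step on the squared-error term
        r = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        g = [2.0 * sum(X[i][j] * r[i] for i in range(n)) for j in range(d)]
        v = [wj - lr * gj for wj, gj in zip(w, g)]
        # soft-thresholding: the step that zeroes out weights of redundant nodes
        w = [max(abs(vj) - lr * lam, 0.0) * (1.0 if vj > 0 else -1.0) for vj in v]
    return w

# toy "node outputs": the third node never contributes to the target
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
y = [2.0, -1.0, 0.0, 2.0, -1.0, 0.0]
w = ista_lasso(X, y, lam=0.4)
```

The third weight lands exactly at zero, so the corresponding node can be removed; an elastic net adds a small L2 term to the same update to stabilize correlated nodes.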

7.
A computational framework for filtering algorithms in the Kalman paradigm (Cited by: 1; self-citations: 0; citations by others: 1)
Filtering algorithms in the Kalman paradigm are those containing a state-prediction step based on the state equation and a state-update step based on the observation equation. To make their computation easier to understand, each is described here from the viewpoint of its computational framework, and a unified computational framework for Kalman-style filtering algorithms is proposed; the unified framework serves both to explain how these filters compute and to construct new filtering algorithms. The framework contains two feedback loops, and the difficulty in constructing a new filter lies in choosing the operating functions in these two loops. Taking adaptive Kalman filters (AKF) as an example, the choice of operating functions is explored, and several operating functions are shown to be suboptimal, laying a sound theoretical basis for eventually constructing a high-performance AKF algorithm.
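The predict/update structure the framework refers to can be shown in its simplest scalar form, estimating a constant from noisy measurements (state transition and observation matrices both equal to 1; the noise values are illustrative):

```python
def kalman_step(x, P, z, Q, R):
    """One predict+update cycle of a scalar Kalman filter
    (state transition and observation matrices are both 1)."""
    # predict: the state equation propagates the prior
    x_prior = x
    P_prior = P + Q
    # update: the observation equation corrects the prior
    K = P_prior / (P_prior + R)            # Kalman gain
    x_post = x_prior + K * (z - x_prior)
    P_post = (1.0 - K) * P_prior
    return x_post, P_post

# estimate a constant true value near 5.0 from noisy measurements
zs = [4.8, 5.3, 4.9, 5.2, 5.1, 4.95, 5.05]
x, P = 0.0, 1000.0                          # vague initial state
for z in zs:
    x, P = kalman_step(x, P, z, Q=0.0001, R=0.25)
```

The two feedback loops discussed in the paper correspond to how the innovation `z - x_prior` and the covariances feed back into the next cycle; an adaptive filter would additionally re-estimate `Q` or `R` online.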

8.
Based on grey prediction control theory and the quadratic optimization principle of modern control theory, with the weighted minimum of control force and response as the objective function, two active vibration control schemes for rotor systems are designed: a grey GM(1,1) prediction optimal control scheme and a grey Verhulst prediction optimal control scheme. Both schemes are applied to active vibration control of a rotor-bearing system equipped with an electromagnetic damper; numerical simulation verifies the effectiveness of both control methods, and their vibration-suppression performance is compared.

9.
Comparison of the entropy function method with several optimization methods (Cited by: 3; self-citations: 0; citations by others: 3)
The entropy function method (also called the maximum-entropy method) is a recently developed approach to constrained optimization; numerical examples demonstrate its practicality, and it enjoys several good properties [1-4,6,7,12]. The idea is to convert the original optimization problem into an unconstrained smooth optimization problem containing a parameter p; when p is sufficiently large, the solution of the unconstrained problem is taken as an approximate solution of the original. This closely resembles the simple penalty function method, so comparing the two helps in understanding the entropy function method better; above all, we wish to gauge the method's efficiency. Owing to the similarity between the entropy function method and the simple penalty method, the two can be compared theoretically for minimax problems, but the entropy function method cannot be compared theoretically with other methods, so those comparisons must be carried out through numerical experiments.
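The smooth parameterized problem is built from the aggregate (max-entropy) function F_p(x) = (1/p) ln Σ_i exp(p f_i(x)), which over-approximates max_i f_i(x) by at most ln(m)/p, so letting p grow recovers the original max. A minimal numerical illustration of that aggregate function (the sample values are arbitrary):

```python
import math

def entropy_max(fs, p):
    """The aggregate function F_p = (1/p) * ln(sum_i exp(p * f_i)):
    a smooth upper approximation of max(f_1, ..., f_m),
    tight to within ln(m)/p."""
    m = max(fs)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(p * (f - m)) for f in fs)) / p

fs = [1.0, 2.0, 3.0]
loose = entropy_max(fs, p=1.0)   # noticeably above the true max of 3
tight = entropy_max(fs, p=10.0)  # within ln(3)/10 of the true max
```

Replacing the max of constraint violations with F_p turns a nonsmooth minimax (or constrained) problem into a smooth unconstrained one, much as a penalty function does.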

10.
An ensemble forecasting method and its application to disaster prediction (Cited by: 1; self-citations: 1; citations by others: 0)
This paper introduces two forecasting models, neural networks and grey prediction, analyzes their strengths and weaknesses, and proposes an ensemble forecasting method that combines the two. Applied to disaster prediction, which is of first importance for loss prevention and reduction in property insurance, the ensemble overcomes the shortcomings of either single algorithm, improves prediction accuracy, and achieves good results.
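The grey half of such an ensemble, a GM(1,1) forecast, can be sketched as follows. The input series is a made-up, roughly exponential example, not the paper's disaster data:

```python
import math

def gm11_forecast(x0, steps=1):
    """Grey GM(1,1) forecast: fit the whitened equation dx1/dt + a*x1 = b
    on the accumulated series x1, then difference the fitted curve back."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]                 # accumulation (1-AGO)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]     # background values
    y = x0[1:]
    m = n - 1
    # least squares for (a, b) in x0[k] = -a*z[k] + b, via normal equations
    szz = sum(v * v for v in z)
    sz, sy = sum(z), sum(y)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # fitted accumulated curve, k is a 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# a roughly exponential series (10% growth): next value should be near 2.662 * 1.1
forecast = gm11_forecast([2.0, 2.2, 2.42, 2.662], steps=1)
```

In an ensemble, this forecast would be combined (e.g. by a weighted sum) with a neural-network forecast of the same quantity.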

11.
Rätsch  Gunnar  Demiriz  Ayhan  Bennett  Kristin P. 《Machine Learning》2002,48(1-3):189-218
We examine methods for constructing regression ensembles based on a linear program (LP). The ensemble regression function consists of linear combinations of base hypotheses generated by some boosting-type base learning algorithm. Unlike the classification case, for regression the set of possible hypotheses producible by the base learning algorithm may be infinite. We explicitly tackle the issue of how to define and solve ensemble regression when the hypothesis space is infinite. Our approach is based on a semi-infinite linear program that has an infinite number of constraints and a finite number of variables. We show that the regression problem is well posed for infinite hypothesis spaces in both the primal and dual spaces. Most importantly, we prove there exists an optimal solution to the infinite hypothesis space problem consisting of a finite number of hypotheses. We propose two algorithms for solving the infinite and finite hypothesis problems. One uses a column generation simplex-type algorithm and the other adopts an exponential barrier approach. Furthermore, we give sufficient conditions for the base learning algorithm and the hypothesis set to be used for infinite regression ensembles. Computational results show that these methods are extremely promising.

12.
Steganalysis based on randomness statistics (Cited by: 2; self-citations: 0; citations by others: 2)
As an important means of evaluating the security of steganographic systems and thwarting illicit information dissemination, steganalysis is attracting growing attention. A survey and comparison of the main steganalysis methods shows that statistical attacks based on randomness statistics, here called RSS analysis, are currently the mainstream. The basic assumption of RSS analysis is described: encrypted and compressed messages exhibit stronger randomness than the cover regions that hide them. Several representative RSS methods are analyzed in terms of their preconditions, statistical features, decision rules, and estimation of hidden-message length, along with their respective strengths and weaknesses. On this basis, problems common to the basic assumption of RSS analysis are discussed, and directions for further work in both theory and methodology are indicated.

13.
14.
International Journal of Computer Mathematics, 2012, 89(9): 1153-1161
In this article, we carry out a local convergence study for Secant-type methods. Our goal is to enlarge the radius of convergence without increasing the necessary hypotheses. Finally, some numerical tests and comparisons with earlier results are analyzed.
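A minimal sketch of the basic secant iteration that these methods generalize (the test function and starting points are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:  # flat secant line: cannot divide, stop here
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # converges to sqrt(2)
```

Local convergence results of the kind studied here bound how far `x0` and `x1` may lie from the root while still guaranteeing that this iteration converges.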

15.
The differential method is an important approach to optical flow computation. Starting from the basic optical flow equation, and assuming some smoothness constraint, it estimates the motion field numerically. Although mathematically well founded, it has inherent theoretical weaknesses. First, the brightness constancy assumption is hard to satisfy. Second, the basic optical flow equation is ill-posed. Third, image differentiation can only be computed approximately. To address these theoretical deficiencies, an improved differential optical flow model is implemented based on current smoothness-constraint methods: local and global constraints are combined, and spatio-temporal pre-smoothing and multi-resolution techniques are used to compute the flow. Simulations show that the method alleviates the three shortcomings above to some extent.

16.
Some properties of optimal thresholds in decentralized detection (Cited by: 1; self-citations: 0; citations by others: 1)
A decentralized Bayesian hypothesis testing problem is considered. It is analytically demonstrated that for the known-signal-in-Gaussian-noise binary hypothesis problem, when there are two sensors with statistically independent, identically distributed Gaussian observations (conditioned on the true hypothesis), there is no loss in optimality in using the same decision rule at both sensors. A multiple hypothesis problem is also considered, and some structure is analytically established for an optimal set of decision rules.
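The identical-threshold property can be checked numerically for one concrete fusion rule. The setup below (AND fusion of two unit-variance Gaussian sensors, equal priors) is assumed here purely for illustration and is not claimed to be the paper's exact formulation:

```python
import math

def q(t):
    """Gaussian tail probability P(N(0,1) > t)."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

def bayes_error(t1, t2):
    """Bayes error of two independent sensors with AND fusion,
    H0: N(0,1) vs H1: N(1,1), equal priors."""
    p_fa = q(t1) * q(t2)              # both sensors fire under H0
    p_d = q(t1 - 1.0) * q(t2 - 1.0)   # both sensors fire under H1
    return 0.5 * p_fa + 0.5 * (1.0 - p_d)

# best identical threshold, found by a coarse 1-D scan
ts = [i / 1000.0 for i in range(-2000, 3000)]
t_star = min(ts, key=lambda t: bayes_error(t, t))
```

Perturbing the two thresholds apart from `t_star` in opposite directions only increases the error, consistent with the no-loss-of-optimality claim for identical decision rules.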

17.
Testing methods are introduced in order to determine whether there is some ‘linear’ relationship between imprecise predictor and response variables in a regression analysis. The variables are assumed to be interval-valued. Within this context, the variables are formalized as compact convex random sets, and an interval arithmetic-based linear model is considered. Then, a suitable equivalence for the hypothesis of linear independence in this model is obtained in terms of the mid-spread representations of the interval-valued variables. That is, in terms of some moments of random variables. Methods are constructed to test this equivalent hypothesis; in particular, the one based on bootstrap techniques will be applicable in a wide setting. The methodology is illustrated by means of a real-life example, and some simulation studies are considered to compare techniques in this framework.

18.
This article introduces the application of equivalence hypothesis testing (EHT) into the Empirical Software Engineering field. Equivalence (also known as bioequivalence in pharmacological studies) is a statistical approach that answers the question "is product T equivalent to some other reference product R within some range Δ?". The "null hypothesis significance test" approach used traditionally in Empirical Software Engineering seeks to assess evidence for differences between T and R, not equivalence. In this paper, we explain how EHT can be applied in Software Engineering, thereby extending it from its current application within pharmacological studies to Empirical Software Engineering. We illustrate the application of EHT by re-examining the behavior of experts and novices when handling code with side effects compared to side-effect-free code, a study previously investigated using traditional statistical testing. We also review published data from two other software engineering experiments: one dataset compared the comprehension of UML and OML specifications, and the other studied the differences between the specification methods UML-B and B. The application of EHT allows us to extract additional conclusions from the previous results. EHT has an important application in Empirical Software Engineering, which motivates its wider adoption and use: EHT can be used to assess the statistical confidence with which we can claim that two software engineering methods, algorithms, or techniques are equivalent.
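The EHT procedure described above is commonly realized as two one-sided tests (TOST). A minimal sketch, using the large-sample normal approximation rather than the t-distribution, with made-up means, standard error, and margin Δ:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tost_equivalent(mean_t, mean_r, se, delta, alpha=0.05):
    """Two one-sided tests (TOST), large-sample normal approximation:
    T and R are declared equivalent if the difference T - R is significantly
    greater than -delta AND significantly less than +delta."""
    diff = mean_t - mean_r
    p_lower = 1.0 - norm_cdf((diff + delta) / se)  # H0: diff <= -delta
    p_upper = norm_cdf((diff - delta) / se)        # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha

# small standard error: the difference is confidently inside (-0.5, +0.5)
close = tost_equivalent(10.1, 10.0, se=0.1, delta=0.5)
# large standard error: a difference as big as 0.5 cannot be ruled out
unclear = tost_equivalent(10.1, 10.0, se=0.4, delta=0.5)
```

Note how a non-significant difference test would say nothing in the second case, while TOST makes the lack of evidence for equivalence explicit.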

19.
Three hypotheses are formulated. First, in the “design space” of possible electronic circuits, conventional design methods work within constrained regions, never considering most of the whole. Second, evolutionary algorithms can explore some of the regions beyond the scope of conventional methods, raising the possibility that better designs can be found. Third, evolutionary algorithms can in practice produce designs that are beyond the scope of conventional methods, and that are in some sense better. A reconfigurable hardware controller for a robot is evolved, using a conventional architecture with and without orthodox design constraints. In the unconstrained case, evolution exploited the enhanced capabilities of the hardware. A tone discriminator circuit is evolved on an FPGA without constraints, resulting in a structure and dynamics that are foreign to conventional design and analysis. The first two hypotheses are true. Evolution can explore the forms and processes that are natural to the electronic medium, and nonbehavioral requirements can be integrated into this design process, such as fault tolerance. A strategy to evolve circuit robustness tailored to the task, the circuit, and the medium is presented. Hardware and software tools enabling research progress are discussed. The third hypothesis is a good working one: practically useful but radically unconventional evolved circuits are in sight.

20.
Image segmentation is the basis of object-oriented image analysis. Conventional segmentation algorithms generally assume spectral homogeneity, but this assumption fails for salt pans in arid regions, where two high-contrast surface types, precipitated crystalline salt and brine, coexist within a single spatial object. To remedy the inadequacy of object-oriented image analysis that segments on spectral and shape heterogeneity alone, a November 2008 SPOT 5 image of the Jilantai salt pan and its surroundings is used as a case study: texture features are first extracted with a windowed Fourier transform power-spectrum method, multi-scale segmentation is then performed on texture and spectral features, and the segmented image is finally classified hierarchically to extract salt-pan information. Experimental results show that the method extracts salt-pan information well.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号