Similar Literature
 20 similar documents found (search time: 218 ms)
1.
As with hardware, software reliability requires models for predicting and evaluating the software under development. This entry introduces the characteristics of several software reliability models and, drawing on engineering practice, studies the NHPP model that is widely applied in engineering.
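The NHPP models discussed in this and later entries are specified by a mean value function m(t), the expected cumulative number of faults detected by time t. As a minimal sketch, the common Goel-Okumoto form m(t) = a(1 - e^(-bt)) is shown below with purely illustrative parameter values (a, b and the time points are assumptions, not taken from any entry):

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative faults by time t."""
    return a * (1.0 - math.exp(-b * t))

def go_reliability(x, t, a, b):
    """Probability of no failure in (t, t+x], given the NHPP assumption."""
    return math.exp(-(go_mean_value(t + x, a, b) - go_mean_value(t, a, b)))

# Hypothetical parameters: a = 100 total faults, b = 0.05 detections per fault-hour.
a, b = 100.0, 0.05
print(go_mean_value(10, a, b))        # expected faults found in the first 10 hours
print(go_reliability(1, 10, a, b))    # chance of surviving the next hour fault-free
```

The same two functions underpin most NHPP-based prediction: fit a and b to observed failure data, then read reliability off the fitted m(t).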

2.
Automatic Vehicle Comprehensive-Performance Test System Based on Virtual Instruments   Cited: 1 (self: 0, others: 1)
徐志刚  汪文斌  马静 《现代电子技术》2006,29(9):116-118,124
To shorten the software development cycle of an automatic vehicle comprehensive-performance test system and to improve the accuracy of data processing, the reliability of data communication, and the stability of the running process, a new automatic test system based on virtual instrument technology is proposed. The system is a distributed measurement and control system built on a virtual measurement-and-control model. Its hardware design is simple and sound; its software, developed in LabVIEW 7.0, implements the great majority of the system's functions, fully embodying the design philosophy that "the software is the instrument."

3.
Network-induced delay is a key factor affecting the performance of networked measurement and control systems and is currently a hot research topic. Considering the characteristics of such systems and their delay behavior, a model of a networked remote measurement and control system is established; the network delay is analyzed on the basis of this model, and the use of DMC (Dynamic Matrix Control) to compensate for the delay is discussed. The system is implemented using an embedded real-time operating system, SQLite, MinimumCORBA, and networked sensors.

4.
王鑫  李丽 《信息技术》2013,(7):164-166
A development scheme for a simulation training system for measurement and control equipment is proposed, and the system's architecture and implementation are described in detail. A three-dimensional simulation model of the equipment is built with 3DMAX; the OGRE rendering engine and VC++.net are used to drive the model in real time and handle human-machine interaction, so that the equipment can be operated with a joystick. The scheme shortens the development cycle and reduces development cost, and a worked example demonstrates its practical value.

5.
The importance of software reliability is discussed. The characteristics of various software reliability models are briefly introduced; then, in light of practical conditions, the NHPP model widely applied in engineering is described in detail, and two software reliability models usable in engineering practice are established.

6.
Design and Implementation of a Multitask Measurement and Control System Based on LabVIEW   Cited: 4 (self: 3, others: 1)
A multitask real-time measurement and control system is presented. The system adopts a distributed control architecture, assigning human-machine interaction and data acquisition tasks to a test computer and control tasks to a control computer. The application software is developed on the LabVIEW platform; a two-level multitask scheduling strategy is proposed, and the system is designed and implemented from a software-engineering perspective, improving overall system performance.

7.
9914765 A Reliability Growth Model Based on Correlation Among Software Errors [J] / Zhao Wei // Journal of Xidian University. 1999, 26(3): 286-289, 296 (D). By taking into account the correlation among errors in software, a new NHPP-based software reliability growth model is discussed. Using MLLF, AIC, and mean squared error as criteria, the model is compared with the ordinary S-shaped and delayed S-shaped reliability growth models, with satisfactory results. 8 refs.
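The entry above selects among candidate growth models by AIC and mean squared error. A minimal sketch of those two criteria follows; the fault counts, model predictions, and log-likelihood value are all hypothetical illustrations, not data from the cited paper:

```python
def aic(log_likelihood, k):
    """Akaike information criterion for a fitted model with k parameters; lower is better."""
    return 2 * k - 2 * log_likelihood

def mse(observed, predicted):
    """Mean squared error between observed and model-predicted cumulative fault counts."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical cumulative fault counts at test weeks 1..5, with predictions
# from two fitted candidate models.
observed = [5, 9, 12, 14, 15]
model_a  = [4.8, 9.1, 12.3, 14.2, 15.1]   # e.g. a delayed S-shaped fit
model_b  = [6.0, 10.5, 13.5, 15.0, 15.8]  # e.g. a plain S-shaped fit
print(mse(observed, model_a), mse(observed, model_b))  # model_a fits more closely
print(aic(log_likelihood=-12.4, k=2))  # hypothetical log-likelihood, 2 parameters
```

In practice the log-likelihood comes from the fitted NHPP itself; AIC then penalizes extra parameters so that a more complex model must earn its keep.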

8.
《无线电工程》2017,(8):67-70
To meet the needs of nozzle measurement and control tests, a digital measurement and control system design based on the 1553B bus is proposed. It can control a digital servo mechanism and improves the control accuracy and stability of the nozzle. The hardware and software designs of the nozzle measurement and control system are described, and the performance of the implemented system is verified and tested. Practical application shows that the system is highly reliable, with good real-time behavior and accuracy.

9.
Software reliability models are an important means of quantitatively evaluating software quality. By jointly analyzing the software fault detection rate and fault introduction rate, an improved NHPP software reliability model is established and its reliability function is analyzed, demonstrating the model's validity. Finally, analysis of existing data sets and comparison with classical models demonstrate the proposed model's superiority.

10.
The relationship between threads and processes is analyzed, the runtime and data-protection mechanisms of LabWindows/CVI multithreading are studied, and the efficiency of multithreaded software built on asynchronous timers is compared with that of traditional single-threaded software. LabWindows/CVI multithreading was applied in developing the measurement and control software of a weapon system, achieving the required safety and real-time behavior. The study shows that multithreading executes parallel tasks better and improves measurement and control system performance, with clear advantages in avoiding blocking, reducing run time, and enhancing system reliability.

11.
Test coverage directly describes how thoroughly software has been tested, yet most existing coverage-based software reliability growth models ignore fault removal efficiency. This paper brings both test coverage and fault removal efficiency into the reliability evaluation process, building a non-homogeneous Poisson process software reliability growth model that accounts for the two together. Experiments on a set of failure data show that, for this data set, the proposed model fits better than several other NHPP-class models.

12.
We summarize reliability growth models for hardware and software systems described by a stochastic process, where the underlying process is assumed to be a nonhomogeneous Poisson process (NHPP). The background of NHPP-based reliability growth modelling is surveyed. The Duane model, the earliest postulated and still commonly used reliability growth model, is explained first. Next, the Weibull growth and modified Weibull growth models for hardware systems, and the exponential-type and gamma-type growth models for error detection in software systems, are discussed. Parameter estimates can be obtained by maximum likelihood estimation. Finally, goodness-of-fit tests based on the chi-square, Cramér-von Mises, and Kolmogorov-Smirnov statistics are presented for the NHPP-based reliability growth models.
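The Duane model mentioned above corresponds to the power-law (Crow/AMSAA) NHPP, for which the maximum likelihood estimates have a well-known closed form when testing is time-truncated at T. A sketch with hypothetical failure times follows; only the MLE formulas are standard, the data are invented:

```python
import math

def power_law_mle(failure_times, T):
    """MLE for the power-law NHPP with intensity lam * beta * t**(beta - 1),
    given n failure times observed over the interval (0, T] (time-truncated test).
    Closed form: beta = n / sum(ln(T / t_i)), lam = n / T**beta."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

# Hypothetical failure times (hours) recorded over a 100-hour growth test.
times = [5.0, 12.0, 25.0, 41.0, 70.0, 95.0]
lam, beta = power_law_mle(times, 100.0)
print(lam, beta)  # beta < 1 indicates decreasing intensity, i.e. reliability growth
```

By construction the fitted expected fault count lam * T**beta equals the observed count n at the truncation time, a handy sanity check on any implementation.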

13.
We develop a moving-average non-homogeneous Poisson process (MA NHPP) reliability model that combines the benefits of time-domain and structure-based approaches. The method overcomes a deficiency of existing NHPP techniques, which fall short of addressing repair and internal system structure simultaneously. Our solution adopts a moving-average approach to cover both methods and is expected to improve reliability prediction. The paradigm allows software components to vary in nature and can account for system structure through its ability to integrate individual component reliabilities along an execution path. Component-level modeling supports sensitivity analysis to guide future upgrades and updates. Moreover, the integration capability benefits incremental software development: only the affected portion needs to be re-evaluated rather than the entire package, facilitating software evolution to a greater extent than other methods. Several experiments on different system scenarios and circumstances are discussed, indicating the usefulness of our approach.

14.
Effect of code coverage on software reliability measurement   Cited: 1 (self: 0, others: 1)
Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimations exist because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues in its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted from the models. This paper presents a technique intended to solve this problem, using both time and code coverage measures for the prediction of software failures in operation. Coverage information collected during testing is used to consider only the effective portion of the test data: execution time between test cases that neither increases code coverage nor causes a failure is reduced by a parameterized factor. Experiments were conducted to evaluate this technique, on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tens of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied both to the raw data and to the data adjusted using this technique. Results show that over-estimation of reliability is properly corrected in the cases studied. This new approach has potential not only to achieve more accurate applications of software reliability models, but also to reveal effective ways of conducting software testing.
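The coverage-based adjustment described above is mechanically simple: scale down any inter-test execution interval that added no coverage and triggered no failure. A minimal sketch, assuming a hypothetical test log and compression factor (none of the values come from the paper):

```python
def compress_test_time(intervals, gains, failures, factor):
    """Coverage-based time compensation: an execution interval whose test case
    neither increased code coverage nor caused a failure is scaled down by
    `factor`; all other intervals are kept as-is.
    intervals: execution time preceding each test case
    gains:     whether each test case increased code coverage
    failures:  whether each test case caused a failure
    """
    return [dt if gain or fail else dt * factor
            for dt, gain, fail in zip(intervals, gains, failures)]

# Hypothetical log of 4 test cases; the 2nd and 4th added nothing new.
adjusted = compress_test_time(
    intervals=[10.0, 8.0, 6.0, 9.0],
    gains=[True, False, True, False],
    failures=[False, False, False, False],
    factor=0.2,
)
print(adjusted)
```

The adjusted interval sequence, rather than the raw one, is then fed to the Goel-Okumoto or Musa-Okumoto model, which is what corrects the over-estimation.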

15.
The author studies the Laplace trend test when it is used to detect software reliability growth, and proves its optimality within the framework of the best-known software reliability models. Its intuitive importance is explained, and its statistical properties are established for five models: Goel-Okumoto, Crow, Musa-Okumoto, Littlewood-Verrall, and Moranda. The Laplace test has excellent optimality properties for several models, particularly for nonhomogeneous Poisson processes (NHPPs). It also performs well in the Moranda model, which is not an NHPP; this fully justifies its use as a trend test. Nevertheless, the Laplace test is not completely satisfactory, because neither its exact statistical significance level nor its power is calculable, and nothing can be said about its properties for the Littlewood-Verrall model. Consequently, the author suggests always checking whether the test has good properties in the model at hand, and searching for other tests whose significance level and power are calculable.
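The Laplace factor itself is a one-line statistic: it compares the mean observed failure time against the midpoint of the observation interval. A minimal sketch for time-truncated data (the failure times are hypothetical):

```python
import math

def laplace_factor(failure_times, T):
    """Laplace trend test statistic for failure times observed over (0, T].
    Strongly negative values indicate reliability growth (failures cluster
    early); values near zero are consistent with a homogeneous Poisson
    process; positive values indicate reliability decay."""
    n = len(failure_times)
    mean_time = sum(failure_times) / n
    return (mean_time - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

# Hypothetical data: failures concentrated early in a 100-hour test.
u = laplace_factor([3.0, 7.0, 12.0, 20.0, 35.0], 100.0)
print(u)  # negative, consistent with reliability growth
```

Under the homogeneous-Poisson null hypothesis the statistic is approximately standard normal, which is how it is usually compared against a threshold in practice.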

16.
Abstract: Kernel-function-based software reliability models generally model the relationship between a software failure time and the m failure times observed before it. This work focuses on how different values of m affect the predictive ability of kernel-function reliability models. On five failure data sets of different types, a Mann-Kendall test shows that predictive ability gradually degrades as m grows, indicating that recent failure times predict the future better than failure times observed long before. Dividing the range of m into several intervals and applying a paired t-test, the experiments show that the models achieve their best predictive performance when m ∈ {6, 7, 8, 9, 10}.

17.
In this paper, we discuss a software reliability growth model with a learning factor for imperfect debugging based on a non-homogeneous Poisson process (NHPP). Parameters used in the model are estimated. An optimal release policy is obtained for a software system based on the total mean profit and reliability criteria. A random software life-cycle is also incorporated in the discussion. Numerical results are presented in the final section.
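Optimal-release analyses of the kind mentioned above typically trade off the cost of continued testing against the higher cost of fixing faults after release. The sketch below uses a simple cost-minimization variant (not the cited paper's profit model) with a Goel-Okumoto mean value function; all parameter values are hypothetical:

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto expected cumulative faults by time t."""
    return a * (1.0 - math.exp(-b * t))

def total_cost(T, a, b, c1, c2, c3):
    """Expected total cost of releasing at time T: faults fixed before release
    cost c1 each, faults escaping to the field cost c2 each (c2 > c1), and
    testing costs c3 per hour."""
    m = go_mean(T, a, b)
    return c1 * m + c2 * (a - m) + c3 * T

# Hypothetical parameters; scan a grid of candidate release times.
a, b = 100.0, 0.05
c1, c2, c3 = 1.0, 5.0, 0.5
best_T = min((t / 10.0 for t in range(1, 2001)),
             key=lambda t: total_cost(t, a, b, c1, c2, c3))
print(best_T, total_cost(best_T, a, b, c1, c2, c3))
```

Setting the derivative to zero gives the textbook condition m'(T) = c3 / (c2 - c1): stop testing once the fault detection rate drops below the point where an hour of testing costs more than it saves.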

18.
Software reliability modeling is one of the main approaches to software reliability evaluation. Since no single model is suitable for all software projects, model selection has become an important research direction. The decision tree is a data mining algorithm. This paper first introduces the idea of combining data mining with software reliability model selection, then analyzes decision tree construction algorithms in detail, and finally simulates the tree construction process on a data set, verifying the feasibility and accuracy of the approach.

19.
吴良清 《电子工程师》2007,33(5):39-41,66
Traditional software reliability prediction relies mainly on probabilistic methods, whose assumptions often diverge from reality. Exploiting the strengths of Bayesian networks in capturing expert knowledge and clearly expressing the relationships among relevant factors, a Bayesian-network-based software reliability prediction model is constructed. The model accounts for imperfect debugging and debugging time as well as software reliability factors, improving accuracy and effectiveness, and is validated on an example implemented in MATLAB with the BN Toolkit package. To compensate for the inconvenience of GUI design in MATLAB, a system design combining VC and MATLAB programming is given for implementing the reliability prediction.

20.
This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling that considers both testing effort and fault detection rates (FDR). In general, software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing effort is applied. Practical field studies suggest that the testing-effort consumption pattern can be estimated and trends in the FDR predicted. A set of time-variable, testing-effort-based FDR models is developed with the inherent flexibility to capture a wide range of possible fault detection trends: increasing, decreasing, and constant. The scheme has a flexible structure and can model a wide spectrum of software development environments under various testing efforts. The paper describes how the FDR can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of the model and the related parametric decomposition methods is demonstrated on several real data sets from various software projects. The evaluation results show that the proposed framework incorporating testing effort and FDR into SRGM has fairly accurate prediction capability and depicts real-life situations more faithfully. The technique can be applied to a wide range of software systems.
