Similar Documents
20 similar documents found.
1.
Software quality prediction modeling is a key technique in software quality evaluation systems: it makes it possible to evaluate the quality characteristics that users care about. Prediction models are typically used to uncover the relationship between metric data and quality factors, but this relationship is often complex and nonlinear, which limits traditional modeling methods; artificial neural networks, by contrast, offer a way to model such nonlinear relationships.
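For illustration only, a minimal sketch of the idea: fit a small feed-forward network that maps module metrics to a quality factor. The abstract names no architecture or dataset; the six metric columns, the synthetic target, and the scikit-learn MLPRegressor setup below are assumptions, not the authors' model.

```python
# Minimal sketch: fitting a nonlinear metrics -> quality mapping with a small
# feed-forward neural network (the abstract names no specific architecture).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 6))                 # 6 hypothetical module metrics (LOC, complexity, ...)
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)  # synthetic quality factor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out modules:", model.score(X_te, y_te))
```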

2.
禹翔 《电子测试》2014,(10):96-98
With the advance of science and technology, software research in China has made breakthrough progress. At the same time, however, problems with program code have become increasingly common and inconvenient for users, so the necessity and importance of software quality is now recognized by many software researchers. To integrate, extend, refine and improve the Log and MCCABE metric systems, a score-distribution-function model is built on 16 primary metric elements to evaluate software quality. Nine software products of different types are tested to determine the model parameters; each product is then given a composite score according to the weights of the metric elements, and the quality of its program code is computed in this way.
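A rough sketch of the composite-scoring step described above, assuming a simple weighted sum of per-metric scores; the 16 weights and the 9 programs' scores below are placeholders, not the values used in the cited study.

```python
# Illustrative composite scoring: per-metric scores are combined with weights.
# The 16 metric elements and their weights here are placeholders, not the
# values used in the cited paper.
import numpy as np

weights = np.full(16, 1.0 / 16)          # hypothetical metric-element weights (sum to 1)
scores  = np.random.default_rng(1).uniform(60, 100, size=(9, 16))  # 9 programs x 16 metric scores

composite = scores @ weights             # weighted composite score per program
for i, s in enumerate(composite, start=1):
    print(f"program {i}: composite quality score = {s:.1f}")
```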

3.
Software quality directly affects the use and maintenance of software, so it is receiving more and more attention from developers, maintainers, managers and users. Quantitative analysis of software quality and the software process, i.e. software measurement, is an important topic in software engineering. This paper introduces the system architecture of SQCP, a TSP-based software quality control platform developed by our project team, and focuses on how the software metrics proposed by the authors were applied during the development of SQCP.

4.
The main aim of this paper is to propose a novel set of metrics that measure the quality of the image enhancement of mammographic images in a computer-aided detection framework aimed at automatically finding masses using machine learning techniques. Our methodology includes a novel mechanism for combining the proposed metrics into a single quantitative measure. We have evaluated our methodology on 200 images from the publicly available digital database for screening mammograms. We show that the quantitative measures help us select the best-suited image enhancement on a per-mammogram basis, which improves the quality of the subsequent image segmentation far more than using the same enhancement method for all mammograms.

5.
宁德军  叶培根  刘琴  李梅 《电子学报》2018,46(12):2930-2935
Open-source software is widely used across software domains such as operating systems and containers, but there is currently no method that measures open-source software comprehensively. Building on measures of user interest and developer participation, we propose a metric that overcomes the limitation of single-dimension measurement. Based on studies of the DM model, software viability models and related literature, and on mining data from open-source software repositories, this paper clusters project process data, applies principal component analysis and regression analysis, and reflects on the development process to derive a success metric model for open-source software based on repository data and statistical algorithms. Comparison with user-interest and developer-participation measurements shows that the proposed model, using repository data that can be collected automatically and non-intrusively, measures the success of open-source projects more comprehensively. The model can be applied to help enterprises select high-quality open-source projects, to academic research, and to intelligent project recommendation.
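A hedged sketch of the statistical steps the abstract names (clustering, principal component analysis, regression) applied to synthetic repository features; the feature names, cluster count and success target below are assumptions rather than the paper's actual model.

```python
# Sketch of the statistical steps named in the abstract (clustering, PCA,
# regression) on synthetic repository features; feature names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: stars, forks, commits, contributors, issues closed (illustrative)
X = rng.random((300, 5)) * [5000, 800, 10000, 200, 3000]
success = X @ [0.3, 0.2, 0.2, 0.2, 0.1] + rng.normal(0, 50, 300)   # synthetic "success" target

Xs = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)  # group similar projects
Z = PCA(n_components=2).fit_transform(Xs)                                   # reduce to principal components
reg = LinearRegression().fit(Z, success)                                    # relate components to success
print("clusters:", np.bincount(clusters), " R^2:", round(reg.score(Z, success), 3))
```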

6.
Software quality is an important indicator in software development, and software measurement is an important way of assuring it. This paper analyzes in depth the development of program slicing and its wide application across the areas of software engineering, focuses on object-oriented program slicing, and applies the technique to software measurement, obtaining a new measurement method. Finally, an experiment demonstrates the feasibility of the method.

7.
A coverage analysis tool for the effectiveness of software testing
This paper describes the software testing and analysis tool, “ATAC (Automatic Test Analysis for C)”, developed as a research instrument to measure the effectiveness of testing data. It is also a tool to facilitate the design and evaluation of test cases during software development. To demonstrate the capability and applicability of ATAC, the authors obtained 12 program versions of a critical industrial application developed in a recent university/industry N-version software project, and used ATAC to analyze and compare coverage of the testing on the program versions. Preliminary results from this investigation show that ATAC is a powerful testing tool to provide testing metrics and quality control guidance for the certification of high quality software components or systems.

8.
The last decade marked the first real attempt to turn software development into engineering through the concepts of Component-Based Software Development (CBSD) and Commercial Off-The-Shelf (COTS) components, with the goal of creating high-quality parts that could be joined together to form a functioning system. One of the most critical processes in CBSD is the selection of the software components (from either in-house or external repositories) that fulfill some architectural and user-defined requirements. However, there is currently a lack of quality models and metrics that can help evaluate the quality characteristics of software components during this selection process. This paper presents a set of measures to assess the Usability of software components, and describes the method followed to obtain and validate them.

9.
Wireless Mesh Networks form a wireless backbone that provides ubiquitous Internet access and support of multimedia services. In this scenario, traffic crosses multi-hop paths, through mesh routers and gateways, causing high levels of interference. To address this problem, schemes that introduce routing metrics taking the characteristics of the interference into account have been adopted to improve application performance. Given the diversity of interference-aware routing metrics for Wireless Mesh Networks, it is necessary to assess the impact of employing these routing metrics on multimedia traffic performance, and in particular on video streaming. This paper seeks to fill this gap by using simulation to evaluate the video streaming performance when the most relevant interference-aware routing metrics are used. The degree of video quality can be evaluated from two perspectives: the network viewpoint and the standpoint of user perception. At the network level, video streaming quality is assessed through IP measures, that is, throughput, delay, jitter and routing overhead. At the user level, ‘Quality of Experience’ metrics are employed to measure the user perception of video quality. The evaluation of the performance takes account of outdoor and indoor environments. The results of the simulation study have shown that routing metrics based on information that detects interference using accurate measures achieve better network and user-perception performance. However, depending on the environment (i.e., whether it is indoor or outdoor), the routing metrics lead to different levels of performance. Although interference-aware routing metrics affect performance at both the network and user levels, there are some cases where they have less impact on the user level, because the user perception parameters are less influenced by the behaviour of the network.

10.
All successful software organizations use measurement as an important means of assuring the quality of their management and engineering, and software cost estimation is the core task of software measurement. To improve the accuracy of cost estimation, this paper calibrates the basic COCOMO model with historical project data from a particular software company, correcting the model parameters with a regression on the logarithms of the data, and obtains satisfactory results in comparison with other methods. The calibrated model predicts project development cost more accurately, so the COCOMO cost measurement genuinely offers guidance for software projects. The model calibration described here is therefore of practical value to software development organizations.
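The basic COCOMO model is Effort = a · KLOC^b, so taking logarithms gives a linear relation that can be re-fitted to a company's historical projects by ordinary least squares, which is one way to read the log-based calibration described above. The project data below are synthetic placeholders.

```python
# Basic COCOMO: effort = a * KLOC**b.  Taking logs gives
#   log(effort) = log(a) + b*log(KLOC),
# so a and b can be re-fit by ordinary least squares on historical projects.
# The project data below are synthetic placeholders.
import numpy as np

kloc   = np.array([ 12,  25,  40,  60,  90, 130])        # historical project sizes
effort = np.array([ 38,  90, 155, 250, 400, 620])        # measured effort (person-months)

b, log_a = np.polyfit(np.log(kloc), np.log(effort), 1)   # slope = b, intercept = log(a)
a = np.exp(log_a)
print(f"calibrated model: effort = {a:.2f} * KLOC^{b:.2f}")
print("predicted effort for a 70 KLOC project:", round(a * 70 ** b, 1), "person-months")
```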

11.
Software quality has traditionally been measured with various software metrics, such as the decoupling level (DL), which can be used to predict software defects. However, DL treats all files equally and does not take file importance into consideration. Therefore, a novel software quality metric based on file importance, named improved decoupling level (IDL), is proposed. First, the PageRank algorithm is used to calculate the importance of files and obtain the weights of the dependencies; defect prediction models are then established by combining the software scale, dependencies, scores and software defects to assess software quality. Compared with most existing module-based software quality evaluation methods, IDL has similar or even superior performance in predicting software quality. The results indicate that, by combining the PageRank algorithm with DL, IDL measures the importance of each file in the software more accurately, indirectly reflects the quality of the software by predicting its bug information, and improves the accuracy of the bug prediction results.
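A minimal sketch of the PageRank step only: ranking files in a dependency graph so that more important files receive larger weights. The graph, file names and damping factor below are illustrative assumptions; the full IDL computation and defect-prediction models are not reproduced here.

```python
# Sketch of the PageRank step only: rank files in a dependency graph so that
# more "important" files get larger weights.  The graph and file names are
# hypothetical; the full IDL computation is not reproduced here.
import networkx as nx

deps = [("ui.py", "core.py"), ("api.py", "core.py"),
        ("core.py", "util.py"), ("api.py", "util.py"), ("tests.py", "api.py")]
g = nx.DiGraph(deps)                       # edge u -> v: file u depends on file v

importance = nx.pagerank(g, alpha=0.85)    # PageRank as a file-importance weight
for f, w in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{f:10s} importance = {w:.3f}")
```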

12.
Research on software complexity measurement based on complex networks
It is becoming harder and harder for developers to understand and control increasingly complex software systems, and traditional software engineering is approaching the limits of its complexity and scalability. Complexity makes software development difficult and its quality hard to guarantee. Recent results in complex network theory provide a new mathematical foundation for software complexity measurement. This paper discusses the causes of software complexity and how it can be measured, surveys current work that combines complex networks with software complexity, explores structural complexity measures based on complex networks, and proposes a software evolution complexity measurement model that combines complex networks with evolutionary algorithms.

13.
张位勇  邹北骥 《电子科技》2014,27(4):168-170
To measure and analyze the software process effectively and produce high-quality software, an improved method for measuring software process quality is proposed. The method decomposes the software process into five phases (requirements, design, coding, testing, and operation/maintenance), decomposes each phase into a number of quality factors from both common and phase-specific perspectives, evaluates the factors with a multi-level fuzzy comprehensive evaluation, and finally demonstrates the effectiveness and reasonableness of the method with an example.
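A minimal sketch of one level of fuzzy comprehensive evaluation, B = W · R, where each row of the membership matrix R gives a quality factor's degree of membership in the evaluation grades and W holds the factor weights; all numbers below are illustrative, not taken from the cited paper.

```python
# One level of fuzzy comprehensive evaluation: B = W . R, where each row of R
# gives a quality factor's membership in the grades (excellent/good/fair/poor)
# and W weights the factors.  All numbers here are illustrative.
import numpy as np

W = np.array([0.3, 0.25, 0.2, 0.15, 0.1])          # weights of 5 quality factors in one phase
R = np.array([[0.5, 0.3, 0.15, 0.05],              # factor 1 memberships over 4 grades
              [0.4, 0.4, 0.15, 0.05],
              [0.3, 0.4, 0.20, 0.10],
              [0.2, 0.5, 0.20, 0.10],
              [0.3, 0.3, 0.30, 0.10]])

B = W @ R                                           # phase-level grade membership vector
grades = ["excellent", "good", "fair", "poor"]
print("grade memberships:", B.round(3), "-> grade:", grades[int(B.argmax())])
```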

14.
This work analyzes the use of categorical metrics on a very specific subset of Intelligent Systems: Fuzzy Systems. Several characteristics of such systems must be carefully evaluated when metrics and indicators are defined, so that the fuzzy essence is reflected in the evaluation result. A set of metrics and indicators is defined and applied to the classical inverted-pendulum problem. The paper does not intend to provide an exhaustive analysis of quality evaluation for soft computing problems; it merely presents a way to start the study of quality measurement in that area.

15.
Predictive models that incorporate a functional relationship of program error measures with software complexity metrics and metrics based on factor analysis of empirical data are developed. Specific techniques for assessing regression models are presented for analyzing these models. Within the framework of regression analysis, the authors examine two separate means of exploring the connection between complexity and errors. First, the regression models are formed from the raw complexity metrics. Essentially, these models confirm a known relationship between program lines of code and program errors. The second methodology involves the regression of complexity factor measures and measures of errors. These complexity factors are orthogonal measures of complexity from an underlying complexity domain model. From this more global perspective, it is believed that there is a relationship between program errors and complexity domains of program structure and size (volume). Further, the strength of this relationship suggests that predictive models are indeed possible for the determination of program errors from these orthogonal complexity domains.
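A rough sketch of the second methodology under stated assumptions: extract orthogonal complexity factors from correlated raw metrics with factor analysis, then regress error counts on those factors. The metrics and error data below are synthetic.

```python
# Rough sketch: extract orthogonal complexity factors from raw metrics with
# factor analysis, then regress error counts on the factors.  Data are synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
size      = rng.poisson(300, 150)                              # e.g. lines of code per module
structure = rng.poisson(15, 150)                               # e.g. cyclomatic complexity
metrics   = np.column_stack([size, size * 1.1 + rng.normal(0, 20, 150),
                             structure, structure + rng.normal(0, 2, 150)])
errors    = 0.02 * size + 0.5 * structure + rng.normal(0, 3, 150)

factors = FactorAnalysis(n_components=2, random_state=0).fit_transform(metrics)
model = LinearRegression().fit(factors, errors)
print("R^2 of errors vs. complexity factors:", round(model.score(factors, errors), 3))
```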

16.
Dependability evaluation is a basic component in assessing the quality of repairable systems. A general model (Op), specifically designed for software systems, is presented; it allows the evaluation of various dependability metrics, in particular availability measures. Op is a structural model based on Markov process theory and is an attempt to overcome some limitations of the well-known Littlewood reliability model for modular software. This paper gives the mathematical results necessary for the transient analysis of this general model, together with algorithms that can evaluate it efficiently. More specifically, from the parameters describing the evolution of the execution process when there is no failure, the failure processes together with the way they affect the execution, and the recovery process, results are obtained for the distribution function of the number of failures in a fixed mission and for dependability metrics that are much more informative than the usual ones in a white-box approach. The estimation procedures for the Op parameters are briefly discussed. Some simple examples illustrate the interest of such a structural view and explain how to account for reliability growth of part of the software with the transformation approach developed by Laprie et al. The complete transient analysis of Op also allows discussion of Littlewood's Poisson approximation for his model.
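The Op model itself is not reproduced here; as a generic illustration of transient analysis of a Markov model, the sketch below propagates the state probabilities of a small continuous-time Markov chain as p(t) = p(0) · e^{Qt}. The three-state generator matrix is purely hypothetical.

```python
# Generic transient analysis of a small continuous-time Markov chain (not the
# Op model itself): state probabilities evolve as p(t) = p(0) @ expm(Q * t).
# The generator matrix Q below (states: executing module A, module B, failed)
# is purely illustrative.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.6,  0.5,  0.1],     # A -> B at rate 0.5, A -> failed at rate 0.1
              [ 0.4, -0.45, 0.05],    # B -> A at rate 0.4, B -> failed at rate 0.05
              [ 0.0,  0.0,  0.0]])    # failed state is absorbing

p0 = np.array([1.0, 0.0, 0.0])        # start in module A
for t in (1.0, 10.0, 100.0):
    p = p0 @ expm(Q * t)
    print(f"t = {t:6.1f}  P(still running) = {1 - p[2]:.4f}")
```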

17.
Perceptual quality metrics applied to still image compression
We present a review of perceptual image quality metrics and their application to still image compression. The review describes how image quality metrics can be used to guide an image compression scheme and outlines the advantages, disadvantages and limitations of a number of quality metrics. We examine a broad range of metrics ranging from simple mathematical measures to those which incorporate full perceptual models. We highlight some variation in the models for luminance adaptation and the contrast sensitivity function and discuss what appears to be a lack of a general consensus regarding the models which best describe contrast masking and error summation. We identify how the various perceptual components have been incorporated in quality metrics, and identify a number of psychophysical testing techniques that can be used to validate the metrics. We conclude by illustrating some of the issues discussed throughout the paper with a simple demonstration.

18.
Electrocardiograph (ECG) compression techniques are gaining momentum due to the huge database requirements and wide-band communication channels needed to maintain high-quality ECG transmission. Advances in computer software and hardware enable the birth of new ECG compression techniques aiming at high compression rates. In general, most of the introduced ECG compression techniques base their performance evaluation on either inaccurate measures or measures targeting the random behavior of error. In this paper, a new wavelet-based quality measure is proposed. The new approach is based on decomposing the segment of interest into frequency bands, where a weighted score is given to each band depending on its dynamic range and its diagnostic significance. A performance evaluation of the measure is conducted quantitatively and qualitatively. Comparative results with existing quality measures show that the new measure is insensitive to error variation, is accurate, and correlates very well with subjective tests.
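A hedged sketch of a band-weighted distortion score in the spirit of the approach described above: decompose the original and reconstructed signals into wavelet subbands and combine per-band relative RMS errors with weights. The wavelet, weights and test signal below are assumptions, not the paper's actual measure.

```python
# Sketch of a band-weighted distortion score: decompose the original and the
# reconstructed ECG into wavelet subbands (PyWavelets) and combine per-band
# RMS errors with weights.  The weights and signal are placeholders, not the
# paper's actual values.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 2048)
original      = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
reconstructed = original + 0.02 * rng.standard_normal(t.size)   # stand-in for a compressed/decoded signal

bands_o = pywt.wavedec(original, "db4", level=5)
bands_r = pywt.wavedec(reconstructed, "db4", level=5)
weights = [0.3, 0.25, 0.2, 0.1, 0.1, 0.05]                      # hypothetical diagnostic weight per band

score = sum(w * np.sqrt(np.mean((o - r) ** 2)) / (np.sqrt(np.mean(o ** 2)) + 1e-12)
            for w, o, r in zip(weights, bands_o, bands_r))
print("weighted wavelet distortion score:", round(score, 4))
```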

19.
With the rapid development of radar technology, radar software is upgraded and modified ever more frequently, and maintainability has become an important measure of radar software quality. Around seven evaluation criteria for maintainability-oriented design, and across the phases of the software life cycle, this paper systematically describes concrete techniques for designing maintainable radar software. It proposes combining an open software architecture, software middleware, component-based software development and management, parameterized software design, real-time reconfiguration based on condition monitoring and diagnosis, and usability design of interface software to improve the maintainability of radar software across the board.

20.
Many studies in software reliability have attempted to develop a model for predicting the faults of a software module, because the application of good prediction models enables optimal resource allocation during the development period. In this paper, we consider the change-request data collected from the field test of a large-scale software system and develop statistical models of the software module that incorporate a functional relation between the faults and some software metrics. To this end, we discuss general aspects of regression methods, the problem of multicollinearity, and measures of model evaluation. We consider four possible regression models, including two stepwise regression models and two nonlinear models. The four models are evaluated with respect to their predictive quality.
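As a small illustration of the multicollinearity issue mentioned above, the sketch below computes variance inflation factors, VIF_j = 1 / (1 - R_j^2), by regressing each metric on the remaining ones; the metrics and data are synthetic.

```python
# Multicollinearity check mentioned in the abstract, sketched as variance
# inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
# metric j on the remaining metrics.  Data and metric names are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
loc        = rng.poisson(400, 120)
complexity = 0.05 * loc + rng.normal(0, 3, 120)      # correlated with size on purpose
fan_out    = rng.poisson(8, 120)
X = np.column_stack([loc, complexity, fan_out]).astype(float)
names = ["LOC", "complexity", "fan_out"]

for j, name in enumerate(names):
    others = np.delete(X, j, axis=1)
    r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
    print(f"VIF({name}) = {1.0 / (1.0 - r2):.2f}")
```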
