Similar Literature
20 similar articles found
1.
Software architecture is a key artifact of the software development process, and its quality should be analyzed and evaluated as early as possible. Current research on software architecture evaluation focuses mainly on scenario-based methods, which are qualitative and subjective and require no dedicated architecture description language. This paper proposes using the Unified Modeling Language (UML) as the architecture description language and evaluating software architecture quantitatively through metrics. Building on UML's visual, multi-view, semi-formal nature and its consistent use across the whole software development life cycle, a set of UML metrics is proposed that measures software architecture from three aspects: the information content expressed by UML diagrams, their visual impact, and the connectivity among graphical modeling elements. The application of these UML metrics to evaluating architecture quality attributes such as size, complexity, and structuredness is analyzed and discussed.
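A minimal sketch of two of the three measurement aspects, assuming a toy in-memory representation of a UML class diagram; the data structure, metric names, and formulas below are illustrative stand-ins, not the paper's actual metric suite:

```python
# Approximate "information content" (element counts) and "connectivity"
# (relationships per class) on a hypothetical class-diagram representation.
from dataclasses import dataclass, field

@dataclass
class ClassDiagram:
    classes: set[str] = field(default_factory=set)
    # relationships as (source, target, kind), e.g. ("Order", "Item", "association")
    relationships: list[tuple[str, str, str]] = field(default_factory=list)

def information_content(d: ClassDiagram) -> int:
    """Count of modeling elements as a crude proxy for information content."""
    return len(d.classes) + len(d.relationships)

def connectivity(d: ClassDiagram) -> float:
    """Relationships per class: higher values suggest denser structural coupling."""
    return len(d.relationships) / max(len(d.classes), 1)

d = ClassDiagram({"Order", "Item", "Customer"},
                 [("Order", "Item", "association"), ("Customer", "Order", "association")])
print(information_content(d), connectivity(d))
```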

2.
Entropy-Based Complexity Measurement of Information System Business Models
The complexity of the business model determines the complexity of an enterprise information system and strongly affects how well the system can be restructured. Existing research concentrates mostly on code-level complexity metrics and pays little attention to the complexity of business models. This paper first presents a layered architecture for enterprise business models and, based on the dependency and decomposition relations among model entities, decomposes a business model into a set of basic model units. It then proposes an entropy-based complexity measurement method: information entropy describes the complexity of the business model, the complexity of each model entity and dependency relation is derived recursively from the complexity of the basic model units, and these values are combined into the complexity of the whole model. A real-world case study verifies the feasibility of the method, which provides an effective reference and decision basis for designing and constructing information systems.
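A minimal sketch of the entropy idea, assuming each model entity is a bag of basic model units and its complexity is the Shannon entropy of the unit-type distribution; the aggregation rule (a plain sum) and the entity/unit names are illustrative, not the paper's exact formulation:

```python
import math
from collections import Counter

def entropy(units: list[str]) -> float:
    """Shannon entropy H = -sum(p_i * log2 p_i) over basic model unit types."""
    counts = Counter(units)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def model_complexity(entity_units: dict[str, list[str]]) -> float:
    """Aggregate entity complexities into a whole-model complexity (here: a sum)."""
    return sum(entropy(units) for units in entity_units.values())

business_model = {
    "OrderProcess": ["task", "task", "gateway", "event"],
    "Invoicing":    ["task", "event"],
}
print(f"model complexity: {model_complexity(business_model):.3f} bits")
```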

3.
Sun-Jen Huang  Richard Lai 《Software》1998,28(14):1465-1491
Communication software systems have become very large and complex. Recognizing the complexity of such software systems is a key element in their development activities. Software metrics are useful quantitative indicators for assessing and predicting software quality attributes such as complexity. However, most existing metrics are extracted from source programs at the implementation phase of the software life cycle. They cannot provide early feedback during the specification phase, and by then it is difficult and expensive to change the system if the metrics indicate problems. It is therefore important to be able to measure system complexity at the specification phase. However, most software specifications are written in natural language, from which metrics information is very hard to extract. In this paper, we describe how complexity information can be derived from a formal communication protocol specification written in Estelle, so that it is possible to predict the complexity of its implementation and subsequently to manage its development better. © 1998 John Wiley & Sons, Ltd.
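A heavily hedged sketch of the general idea of pulling counts out of a formal specification text: a real tool would parse Estelle properly, and the paper's metrics are certainly richer than keyword counts; the regex patterns and the tiny spec fragment below are illustrative only:

```python
import re

def estelle_counts(spec: str) -> dict[str, int]:
    """Crude structural counts from an Estelle-like specification text."""
    return {
        "modules":     len(re.findall(r"\bmodule\b", spec, re.IGNORECASE)),
        "states":      len(re.findall(r"\bstate\b", spec, re.IGNORECASE)),
        "transitions": len(re.findall(r"\btrans\b", spec, re.IGNORECASE)),
    }

spec_text = """
module Sender systemprocess;
  state IDLE, WAIT_ACK;
  trans from IDLE to WAIT_ACK begin end;
"""
print(estelle_counts(spec_text))
```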

4.
Software Structure Metrics Based on Information Flow
Structured design methodologies provide a disciplined and organized guide to the construction of software systems. However, while the methodology structures and documents the points at which design decisions are made, it does not provide a specific, quantitative basis for making these decisions. Typically, the designers' only guidelines are qualitative, perhaps even vague, principles such as "functionality," "data transparency," or "clarity." This paper, like several recent publications, defines and validates a set of software metrics that are appropriate for evaluating the structure of large-scale systems. These metrics are based on the measurement of information flow between system components. Specific metrics are defined for procedure complexity, module complexity, and module coupling. The validation, using the source code for the UNIX operating system, shows that the complexity measures are strongly correlated with the occurrence of changes. Further, the metrics for procedures and modules can be interpreted to reveal various types of structural flaws in the design and implementation.
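A minimal sketch in the spirit of this information-flow metric, using the well-known form procedure complexity = length × (fan-in × fan-out)²; the field names and sample values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    name: str
    length: int   # e.g. lines of code
    fan_in: int   # information flows into the procedure (calls + data reads)
    fan_out: int  # information flows out of the procedure (calls + data writes)

def information_flow_complexity(p: Procedure) -> int:
    """length * (fan_in * fan_out)^2 -- high values flag structural hot spots."""
    return p.length * (p.fan_in * p.fan_out) ** 2

for p in [Procedure("parse", 120, 3, 4), Procedure("log", 15, 10, 1)]:
    print(p.name, information_flow_complexity(p))
```

Note how a short procedure with many flows (`log`) can still score high: the squared flow term dominates length, which is what makes the metric sensitive to coupling flaws rather than mere size.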

5.
6.
Given the complexity of many contemporary software systems, it is often difficult to gauge the overall quality of their underlying software components. A potential technique to automatically evaluate such qualitative attributes is to use software metrics as quantitative predictors. In this case study, an aggregation technique based on fuzzy integration is presented that combines the predicted qualitative assessments from multiple classifiers. Multiple linear classifiers are presented with randomly selected subsets of automatically generated software metrics describing components from a sophisticated biomedical data analysis system. The external reference test is a software developer’s thorough assessment of complexity, maintainability, and usability, which is used to assign corresponding quality class labels to each system component. The aggregated qualitative predictions using fuzzy integration are shown to be superior to the predictions from the respective best single classifiers.
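A minimal sketch of one common fuzzy-integration scheme, the Sugeno integral; the capped-additive fuzzy measure, the per-classifier "worth" densities, and the scores are illustrative assumptions, not the study's fitted measure:

```python
def sugeno_integral(scores: list[float], densities: list[float]) -> float:
    """Aggregate classifier scores h_i with measure g: max_i min(h_(i), g(A_i))."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    best, measure = 0.0, 0.0
    for i in order:
        measure = min(1.0, measure + densities[i])  # g(A_i): capped additive measure
        best = max(best, min(scores[i], measure))   # Sugeno max-min combination
    return best

# Three classifiers' confidence that a component is "high quality",
# weighted by how much each classifier is trusted (densities).
print(sugeno_integral([0.9, 0.4, 0.7], densities=[0.3, 0.5, 0.4]))
```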

7.
The increasing size and complexity of software systems has increased the number of potential failures, making it harder to ensure software reliability. Since it is usually hard to prevent all failures, fault tolerance techniques have become more important. An essential element of fault tolerance is recovery from failures. Local recovery is an effective approach whereby only the erroneous parts of the system are recovered while the other parts remain available. For achieving local recovery, the architecture needs to be decomposed into separate units that can be recovered in isolation. Usually, there are many different alternative ways to decompose the system into recoverable units, and each of these decomposition alternatives performs differently with respect to availability and performance metrics. We propose a systematic approach dedicated to optimizing the decomposition of software architecture for local recovery. The approach provides systematic guidelines to depict the design space of the possible decomposition alternatives, to reduce the design space with respect to domain and stakeholder constraints, and to balance the feasible alternatives with respect to availability and performance. The approach is supported by an integrated set of tools and illustrated for the open-source MPlayer software.
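A hedged sketch of the design-space idea only: enumerate every decomposition of a small module set into recoverable units and rank them with a toy objective that trades availability (favoring isolation) against overhead (favoring fewer units). The module names nod to MPlayer, but the scoring functions are placeholders, not the paper's analytical models:

```python
def partitions(items):
    """Yield all ways to split `items` into non-empty recoverable units."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        yield [[first]] + p                           # first gets its own unit
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i+1:]  # first joins existing unit i

def score(decomposition, alpha=0.7):
    n_units = len(decomposition)
    availability = n_units / sum(map(len, decomposition))  # finer isolation -> better
    overhead = 1.0 / n_units                               # fewer units -> cheaper
    return alpha * availability + (1 - alpha) * overhead

modules = ["gui", "demuxer", "decoder", "player"]
print(max(partitions(modules), key=score))
```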

8.
Software architecture evaluation helps improve software quality and keep system complexity under control, but most purely scenario-based or purely metrics-based techniques measure from a single perspective. To address this problem, this paper combines scenario techniques with a measurement method for aspect-oriented software architectures. A set of metrics measures the architecture and quantifies its structural characteristics; scenarios are introduced to characterize the architecture's quality attributes at a finer granularity and to examine how well the scenarios map onto the architecture. Finally, alternative architecture designs for an insurance case study are compared and selected, confirming the feasibility and practical value of the method. Combining the two techniques supports a more comprehensive evaluation of aspect-oriented software architectures.

9.
Context: In a large object-oriented software system, packages play the role of modules which group related classes together to provide well-identified services to the rest of the system. In this context, it is widely believed that modularization has a large influence on the quality of packages. Recently, Sarkar, Kak, and Rama proposed a set of new metrics to characterize the modularization quality of packages from important perspectives such as inter-module call traffic, state access violations, fragile base-class design, programming to interface, and plugin pollution. These package-modularization metrics are quite different from traditional package-level metrics, which measure software quality mainly from size, extensibility, responsibility, independence, abstractness, and instability perspectives. As such, it is expected that these package-modularization metrics should be useful predictors for fault-proneness. However, little is currently known on their actual usefulness for fault-proneness prediction, especially compared with traditional package-level metrics.
Objective: In this paper, we examine the role of these new package-modularization metrics for determining software fault-proneness in object-oriented systems.
Method: We first use principal component analysis to analyze whether these new package-modularization metrics capture additional information compared with traditional package-level metrics. Second, we employ univariate prediction models to investigate how these new package-modularization metrics are related to fault-proneness. Finally, we build multivariate prediction models to examine the ability of these new package-modularization metrics for predicting fault-prone packages.
Results: Our results, based on six open-source object-oriented software systems, show that: (1) these new package-modularization metrics provide new and complementary views of software complexity compared with traditional package-level metrics; (2) most of these new package-modularization metrics have a significant association with fault-proneness in an expected direction; and (3) these new package-modularization metrics can substantially improve the effectiveness of fault-proneness prediction when used together with traditional package-level metrics.
Conclusions: The package-modularization metrics proposed by Sarkar, Kak, and Rama are useful for practitioners to develop quality software systems.
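A minimal sketch of the analysis pipeline, assuming a table of package-level metric values X and binary fault labels y: PCA checks whether new metrics add orthogonal information, and logistic regression stands in for the uni-/multivariate prediction models. The data is synthetic; the paper's exact models and evaluation protocol may differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))    # 6 package metrics measured on 200 packages
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=200) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)
print("variance explained per component:", pca.explained_variance_ratio_.round(2))

# Univariate models (one metric at a time), then a multivariate model (all six).
for cols in ([0], [3], list(range(6))):
    model = LogisticRegression().fit(Xs[:, cols], y)
    print(cols, "accuracy:", round(model.score(Xs[:, cols], y), 2))
```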

10.
Future factories will feature strong integration of physical machines and cyber-enabled software, working seamlessly to improve manufacturing production efficiency. In these digitally enabled and network-connected factories, each physical machine on the shop floor can have its ‘virtual twin’ available in cyberspace. This ‘virtual twin’ is populated with data streaming in from the physical machines to represent a near real-time as-is state of the machine in cyberspace, resulting in the virtualization of a machine resource to external factory manufacturing systems. This paper describes how streaming data can be stored in a scalable and flexible document-schema-based database such as MongoDB, the data store that makes up the virtual twin system. We present an architecture which allows third-party software apps to interface with the virtual manufacturing machines. We evaluate our database schema against query statements and provide examples of how third-party apps can interface with manufacturing machines using the VMM middleware. Finally, we discuss an operating system architecture for VMMs across the manufacturing cyberspace, which necessitates command and control of various virtualized manufacturing machines, opening new possibilities for cyber-physical systems in manufacturing.
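A hedged sketch of the virtual-twin store: streaming machine-state samples persisted as flexible MongoDB documents, then read back by a third-party app. The database/collection layout, field names, and connection URI are illustrative assumptions, not the paper's schema:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
twins = client["factory"]["machine_twins"]

# One state sample streamed in from a physical machine on the shop floor.
twins.insert_one({
    "machine_id": "mill-07",
    "timestamp": datetime.now(timezone.utc),
    "state": {"spindle_rpm": 8200, "feed_mm_per_min": 450, "alarm": None},
})

# A third-party app reads the latest as-is state of the machine's twin.
latest = twins.find_one({"machine_id": "mill-07"}, sort=[("timestamp", -1)])
print(latest["state"])
```

The schemaless document model is what makes the twin "flexible": a machine that later streams extra sensors simply adds fields to its `state` sub-document without a schema migration.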

11.
This paper introduces the basic principles of Hierarchical Colored Petri Nets (HCPN) and, in combination with local event tables, proposes a modeling method for complex manufacturing systems. The method reduces the complexity of modeling such systems and lays a solid foundation for the modular, hierarchical design of simulation software architectures. The paper also describes how HCPN was applied to build the production-line simulation model of the hot strip mill at WISCO (武钢).
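A minimal colored-Petri-net sketch (far simpler than full HCPN with hierarchy and event tables): places hold multisets of colored tokens, and a transition fires only when an input token's color satisfies its guard. The token colors and the single "charge the furnace" transition are illustrative:

```python
from collections import Counter

marking = {
    "buffer":  Counter({("slab", 1200): 2}),  # token color = (type, temperature_C)
    "furnace": Counter(),
}

def fire_charge(marking) -> bool:
    """Move one hot-enough slab from 'buffer' to 'furnace' (guard: temp >= 1000)."""
    for color, n in marking["buffer"].items():
        kind, temp = color
        if n > 0 and kind == "slab" and temp >= 1000:  # guard on token color
            marking["buffer"][color] -= 1
            marking["furnace"][color] += 1
            return True
    return False  # transition not enabled in the current marking

while fire_charge(marking):
    pass
print(dict(marking["furnace"]))
```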

12.
To respond quickly to the rapidly changing manufacturing environment, it is imperative for the system to have capabilities such as flexibility, adaptability, and reusability. The fractal manufacturing system (FrMS) is a new manufacturing paradigm designed to meet these requirements. To facilitate a dynamic reconfiguration of system elements (i.e., fractals), agents as well as software modules should be self-reconfigurable. Embodiment of a self-reconfigurable manufacturing system can be achieved by using self-reconfigurable software architecture. In this paper, therefore, self-reconfigurable software architecture is designed by conducting the following studies: (1) analysis of the functional requirements of a fractal and the environmental constraints, (2) design of reconfigurable software architecture, especially for a reconfigurable agent, (3) selection of proper techniques to implement software modules, and (4) realization of software architecture equipped with self-reconfigurability. To validate this approach, the designed architecture is applied to the FrMS.
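A hedged sketch of self-reconfiguration at the software level: an agent whose behavior module can be hot-swapped at run time without restarting it. The class and method names map only loosely onto the paper's fractal/agent concepts and are illustrative:

```python
from typing import Callable

class ReconfigurableAgent:
    """An agent that can replace its behavior module while running."""

    def __init__(self, behavior: Callable[[str], str]):
        self._behavior = behavior

    def reconfigure(self, new_behavior: Callable[[str], str]) -> None:
        """Hot-swap the behavior module in response to environmental change."""
        self._behavior = new_behavior

    def handle(self, task: str) -> str:
        return self._behavior(task)

agent = ReconfigurableAgent(lambda t: f"batch-process {t}")
print(agent.handle("order-42"))
agent.reconfigure(lambda t: f"stream-process {t}")  # the environment changed
print(agent.handle("order-43"))
```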

13.
郑志  杨德礼  杨红 《计算机工程》2008,34(10):35-37
Agent-based technology provides an approach to solving complex distributed problems. Software architecture is one of the important means of controlling software complexity, improving software system quality, and supporting software development and reuse. Architecture design can describe the interactions among agents and the planning of their organizational structure, so agent systems benefit from a well-designed architecture. This paper integrates diagrammatic syntax theory with hierarchical predicate transition net theory and proposes a formal modeling method that builds the software architecture of agent systems at two levels: an abstract level (the architecture) and an implementation level (the dynamic behavior). The resulting models are verifiable and traceable, providing a good foundation for analyzing and evaluating the software architecture of agent systems.

14.
The paper shows how software metrics can be used to plan and control software projects. Software metrics will be essential if the software industry is to continue growing and developing complex systems. The only way to increase knowledge of the software development and maintenance processes and of the final product is to measure them and use the measurements in models for estimating their future behaviour. The emphasis of this paper is on complexity metrics and reliability models, and especially on their use for fault content estimation and for control of the development and maintenance processes. Empirical results and guidelines on how to use complexity metrics and reliability models are presented.
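A minimal sketch of fault-content estimation with one standard reliability growth model, Goel-Okumoto, where m(t) = a(1 - e^(-bt)) is fitted to cumulative faults found during testing and the fitted a estimates total fault content. The fault data is synthetic, and the paper's own models and guidelines may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative faults by time t: a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 11)
cumulative_faults = np.array([5, 9, 13, 15, 18, 19, 21, 21, 22, 23])

(a, b), _ = curve_fit(goel_okumoto, weeks, cumulative_faults, p0=(30, 0.1))
print(f"estimated total fault content: {a:.1f}, still latent: {a - 23:.1f}")
```

The latent-fault estimate (a minus the faults already found) is the kind of number that lets a manager decide whether testing can stop or the maintenance process needs tightening.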

15.
A major problem facing manufacturing organisations is how to provide efficient and cost-effective responses to the unpredictable changes taking place in a global market. This problem is made difficult by the complexity of supply chain networks coupled with the complexity of individual manufacturing systems within supply chains. Current systems such as manufacturing execution systems (MES), supply chain management (SCM) systems and enterprise resource planning (ERP) systems do not provide adequate facilities for addressing this problem. This paper presents an approach that would enable manufacturing organisations to dynamically and cost-effectively integrate, optimise, configure, simulate, restructure and control not only their own manufacturing systems but also their supply networks, in a co-ordinated manner to cope with the dynamic changes occurring in a global market. This is realised by a synergy of two emerging manufacturing concepts: Agent-based agile manufacturing systems and e-manufacturing. The concept is to represent a complex manufacturing system and its supply network with an agent-based modelling and simulation architecture and to dynamically generate alternative scenarios with respect to planning, scheduling, configuration and restructure of both the manufacturing system and its supply network based on the coordinated interactions amongst agents.

16.
As software evolves, growth in size and complexity often degrades code design quality and, in turn, maintainability. A large number of software metrics exist for quantifying code design quality, but precisely because there are so many, the results of different metric suites are hard to compare, making it difficult for developers to locate the modules with design-quality problems. This paper systematically surveys and categorizes existing metrics along six dimensions: size, coupling, cohesion, encapsulation, inheritance, and polymorphism. By analyzing the relationships among the metrics and running experiments, the metric most strongly associated with code design quality is identified in each dimension, yielding a six-metric set for monitoring code design quality. Experiments show that this set can effectively pick out classes with design-quality problems and can serve as the key indicators for developers to monitor and locate code design issues.
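A hedged sketch of the selection idea: within each metric dimension, keep the candidate most strongly correlated (here Spearman) with a design-quality indicator. The metric names, groupings, and the quality score are illustrative, not the paper's data or exact procedure:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
metrics = {  # dimension -> {metric name: values measured over 50 classes}
    "coupling": {"CBO": rng.normal(size=50), "RFC": rng.normal(size=50)},
    "cohesion": {"LCOM": rng.normal(size=50), "TCC": rng.normal(size=50)},
}
quality = rng.normal(size=50)  # e.g. an expert-assessed design-quality score

selected = {}
for dimension, candidates in metrics.items():
    # Keep the metric with the strongest absolute rank correlation to quality.
    selected[dimension] = max(
        candidates, key=lambda m: abs(spearmanr(candidates[m], quality)[0]))
print(selected)
```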

17.
18.
Simulation has been used to evaluate various aspects of manufacturing systems. However, building a simulation model of a manufacturing system is time-consuming and error-prone because of the complexity of the systems. This paper introduces a generic simulation modeling framework to reduce the simulation model build time. The framework consists of layout modeling software and a data-driven generic simulation model. The generic simulation model was developed considering the processing as well as the logistics aspects of assembly manufacturing systems. The framework can be used to quickly develop an integrated simulation model of the production schedule, operation processes and logistics of a system. The framework was validated by developing simulation models of cellular and conveyor manufacturing systems.
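A minimal sketch of the data-driven idea: the same generic discrete-event code is driven purely by a routing table of (station, processing time) rows, so modeling a different line means changing data, not code. Station names, times, and the absence of capacity constraints are simplifying assumptions:

```python
import heapq

def simulate(routing, n_jobs=3):
    """Run jobs through the stations given by `routing` (no capacity limits)."""
    events = []
    for j in range(n_jobs):
        heapq.heappush(events, (j * 1.0, j, 0))  # (time, job id, station index)
    while events:
        clock, job, stage = heapq.heappop(events)
        if stage < len(routing):
            station, proc_time = routing[stage]
            heapq.heappush(events, (clock + proc_time, job, stage + 1))
        else:
            print(f"job {job} done at t={clock:.1f}")

# Data describing a conveyor line; a cellular layout would be a different table.
simulate([("kitting", 2.0), ("assembly", 5.0), ("inspection", 1.5)])
```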

19.
Context: Architecture is fundamental for fulfilling requirements related to the non-functional behavior of a software system, such as the quality requirement that response time does not degrade to a noticeable point. Approaches like the Architecture Tradeoff Analysis Method (ATAM) combine qualitative analysis heuristics (e.g. scenarios) for one or more quality metrics with quantitative analyses. A quantitative analysis evaluates a single metric such as response time. However, since quality metrics interact with each other, a change in the architecture can unpredictably affect multiple quality metrics.
Objective: This paper introduces a quantitative method that determines the impact of a design change on multiple metrics, thus reducing the risks in architecture design. As a proof of concept, the method is applied to a simulation model of transaction processing in a client-server architecture.
Method: Factor analysis is used to unveil latent (i.e. not directly measurable) quality features, represented by new variables that reflect architecture-specific correlations between metrics. Separate Analyses of Variance (ANOVA) are then applied to these variables to interpret the tradeoffs detected by factor analysis in terms of the quantified metrics.
Results: The results for the examined transaction processing architecture show three latent quality features, the corresponding groups of strongly correlated quality metrics, and the impact of architecture characteristics on the latent quality features.
Conclusion: The proposed method is a systematic way of relating the variability of quality metrics and the implied tradeoffs to specific architecture characteristics.
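A minimal sketch of the two-step method: factor analysis extracts a latent quality feature behind correlated metrics, then a one-way ANOVA tests whether an architecture characteristic (a two-level design factor) shifts that feature's scores. All data is synthetic and the factor/design names are illustrative:

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
design = np.repeat([0, 1], 50)                 # e.g. thread-pool size: small / large
latent = rng.normal(size=100) + 0.8 * design   # hidden quality feature
# Observed metrics (e.g. response time, throughput, queue length) load on the factor.
X = np.column_stack([latent + rng.normal(scale=0.3, size=100) for _ in range(3)])

scores = FactorAnalysis(n_components=1).fit_transform(X).ravel()
f, p = f_oneway(scores[design == 0], scores[design == 1])
print(f"ANOVA on latent factor: F={f:.1f}, p={p:.3g}")
```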

20.
System reliability has become a main concern during the computer-based system design process. It is one of the most important characteristics of system quality. The continuous increase in system complexity makes reliability evaluation extremely costly, so there is a need to develop new methods requiring less cost and effort. Furthermore, a system is vulnerable to both software and hardware faults. While software faults are usually introduced by the programmer at either the design or the implementation stage, hardware faults are caused by physical phenomena affecting the hardware components, such as environmental perturbations, manufacturing defects, and aging-related phenomena. Software faults can only impact the software components; hardware faults, however, can propagate through the different system layers and affect both the hardware and the software. This paper discusses the differences between software testing and the software fault injection techniques used for reliability evaluation. We describe mutation analysis as a method mainly used in software testing. Then, we detail fault injection as a technique to evaluate system reliability. Finally, we discuss how to use software mutation analysis to evaluate, at the software level, the system's reliability against hardware faults. The main advantage of this technique is its usability at an early design stage, when the instruction set architecture is not yet available. Experimental results from evaluating faults occurring in memory show that the proposed approach significantly reduces the complexity of system reliability evaluation in terms of time and cost.
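A hedged sketch of the fault-injection style of experiment at software level: flip one random bit of a simulated memory per trial, rerun the workload, and count deviations from the fault-free "golden" output. A real campaign would target an instruction-set or RTL model; the byte-array "memory" and toy workload are illustrative:

```python
import random

def workload(memory: bytearray) -> int:
    """Stand-in for the software under test; it only reads half the memory,
    so faults in the unread half are masked."""
    return sum(memory[:8])

golden = bytearray(range(16))
expected = workload(bytearray(golden))

failures = 0
for trial in range(1000):
    mem = bytearray(golden)
    bit = random.randrange(len(mem) * 8)
    mem[bit // 8] ^= 1 << (bit % 8)  # single-event upset: flip one bit
    if workload(mem) != expected:
        failures += 1
print(f"observed failure rate: {failures / 1000:.2%}")
```

The roughly 50% masking rate illustrates why injection campaigns matter: not every hardware fault propagates to a software-visible failure, and the fraction that does is exactly what the reliability evaluation estimates.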
