Similar Documents
20 similar documents found (search time: 209 ms)
1.
A New Method for Measuring Program Complexity   Cited by: 6 (self-citations: 1, by others: 5)
By analyzing the shortcomings of traditional program complexity metrics, this paper first proposes a path complexity metric based on a program decomposition mechanism, then gives an algorithm for computing path complexity, and finally presents an example. The new metric indicates the number of complete test paths a program requires.
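The abstract does not spell out the algorithm, but a minimal sketch of the underlying idea (counting complete entry-to-exit paths in an acyclic control-flow graph; the graph and node names below are made up for illustration) might look like:

```python
def count_paths(cfg, entry, exit_node):
    """Count complete entry-to-exit paths in an acyclic control-flow graph.

    cfg maps each node to its list of successor nodes.
    """
    if entry == exit_node:
        return 1
    return sum(count_paths(cfg, succ, exit_node) for succ in cfg.get(entry, []))

# Two sequential if/else branches: 2 * 2 = 4 complete test paths.
cfg = {
    "entry": ["a", "b"],
    "a": ["join"],
    "b": ["join"],
    "join": ["c", "d"],
    "c": ["exit"],
    "d": ["exit"],
}
print(count_paths(cfg, "entry", "exit"))  # 4
```

For graphs with loops, a real metric would need a bounding convention (e.g., each loop taken at most once), which the abstract does not describe.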

2.
Construction and Optimization of Trust Networks in E-Commerce   Cited by: 1 (self-citations: 0, by others: 1)
Trust relationships among trading entities in an e-commerce environment resemble the complex social relationships of traditional commerce. Measuring trust between entities involves factors such as transaction amount, transaction time, and a consuming entity's personal income and risk attitude toward trust, and is therefore difficult to quantify precisely. To uncover the essential characteristics of these trust relationships, this paper draws on cognitive theories and methods from real-world social networks, analyzes and defines the relevant attributes of entities and their relationships in detail, and proposes a formal model for describing trust networks. It studies methods for constructing trust networks and establishes a set of trust-network optimization algorithms that effectively reduce network complexity. Finally, a tool for automatically generating trust-network visualizations is presented. Case-study analysis shows that the formal description model and the optimization algorithms can reveal the complex trust relationships of e-commerce environments, reduce the complexity of trust measurement algorithms, and provide a theoretical basis for research on trust propagation mechanisms and trust computation models.

3.
A Dependency-Matrix-Based Complexity Metric Model for Component Software   Cited by: 2 (self-citations: 0, by others: 2)
Existing complexity metric models for component software ignore two important factors: the different kinds of dependencies between components and the internal complexity of the components themselves, so their results are incomplete and inaccurate. To address this, the software architecture is abstracted as a weighted directed graph, from which a dependency matrix and an influence matrix between components are obtained, and a complexity metric formula is then derived. Analysis of the formula and a closing example show that the model reflects more faithfully and accurately how the different inter-component dependencies and the components' internal complexity affect overall software complexity, while remaining simple and easy to implement.
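The paper's actual formula is not given in the abstract; the sketch below only illustrates the dependency-matrix/influence-matrix idea. The weights, the internal complexities, and the final aggregate formula are all assumptions made for the example:

```python
# Hypothetical component architecture as a weighted directed graph:
# D[i][j] = strength of component i's dependency on component j.
D = [
    [0.0, 0.6, 0.0],
    [0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0],
]

def mat_mul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Influence matrix: D + D^2 + ... + D^n accumulates indirect dependencies
# along all paths (finite here because the example graph is acyclic).
n = len(D)
influence = [[0.0] * n for _ in range(n)]
power = [row[:] for row in D]
for _ in range(n):
    for i in range(n):
        for j in range(n):
            influence[i][j] += power[i][j]
    power = mat_mul(power, D)

# One plausible aggregate (not the paper's formula): internal complexity
# weighted by 1 + each component's total outgoing influence.
internal = [3.0, 5.0, 2.0]  # assumed per-component internal complexity
total = sum(c * (sum(row) + 1) for c, row in zip(internal, influence))
print(round(total, 2))  # 17.24
```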

4.
A Method for Measuring the Structural Complexity of UML Class Diagrams   Cited by: 1 (self-citations: 0, by others: 1)
Measuring the structural complexity of class diagrams helps evaluate the quality of a system's conceptual model. This paper proposes an improved method for constructing weighted class dependency graphs, together with a suite of structural complexity metrics for class diagrams based on path analysis of such graphs; the suite focuses on the overall structure of inter-class relationships. Computation methods for each metric are also given.

5.
To address the irrational use of knowledge resources when solving business problems in enterprises, a domain-specific model of knowledge resources for business problem solving is proposed. Starting from a domain schema, the basic elements of the business problem-solving domain are described and the relationships between their objects are analyzed in detail. The problem-solving workflow is then formalized through a Problem-Knowledge Event-driven Process Chain (PK-EPC) model built in the domain template and matched to the corresponding knowledge units. Next, the business problem is decomposed through a corresponding application model, and the knowledge units are matched to knowledge carriers. From these models, a multi-level solution model integrating business activities, knowledge units, and knowledge carriers can be derived for the enterprise to choose from, providing a fast and accurate approach to business problem solving. Finally, a Java-based modelling system for enterprise solution schemes oriented to business problem solving was designed, and a case study verifies the feasibility of the model.

6.
付晓东  邹平 《计算机应用》2007,27(B06):302-303,307
Measuring the structural complexity of class diagrams helps evaluate the quality of a system's conceptual model. This paper proposes an improved method for constructing weighted class dependency graphs, together with a suite of structural complexity metrics for class diagrams based on path analysis of such graphs; the suite focuses on the overall structure of inter-class relationships. Computation methods for each metric are also given.

7.
A Method for Measuring the Complexity of Object-Oriented Classes   Cited by: 2 (self-citations: 0, by others: 2)
This paper briefly analyzes several existing complexity metrics for object-oriented software and points out their shortcomings in reflecting class complexity. Applying the idea of software complexity decomposition, it proposes a new class complexity metric that decomposes class complexity into three components: member complexity, member-relationship complexity, and encapsulation complexity, measures each separately, and then derives the total. Member complexity is captured by the sum of the complexity of the members a class implements (SIMC) and the class interface complexity (SCIC); member-relationship complexity is captured by analyzing the pseudo-bipartite graph proposed in the paper; and encapsulation complexity is captured by the member visibility rate. Finally, an example verifies the effectiveness and feasibility of the method.
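The abstract names the three components but not how they are combined; the sketch below is only an illustration of the decompose-then-sum structure. The weighting of the visibility rate and all input numbers are assumptions:

```python
def class_complexity(member_cpx, relation_cpx, n_visible, n_members):
    """Illustrative total = member + relationship + encapsulation complexity.

    member_cpx   : assumed SIMC + SCIC value for the class
    relation_cpx : assumed complexity from the member-relationship graph
    n_visible    : number of externally visible members
    n_members    : total number of members
    """
    visibility_rate = n_visible / n_members   # member visibility rate
    encapsulation_cpx = visibility_rate * member_cpx  # assumed weighting
    return member_cpx + relation_cpx + encapsulation_cpx

# 3 of 5 members visible; member and relationship complexities made up.
print(round(class_complexity(10.0, 4.0, 3, 5), 2))  # 20.0
```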

8.
张文  刘刚  朱一凡 《计算机应用研究》2011,28(11):4081-4085
Architecture technology is central to information system design, and the complexity of an architecture directly determines the complexity of building the system. Building on program complexity metrics and on research into the concept of complexity, this paper proposes measuring architecture complexity by separating it into dynamic complexity and structural complexity, and presents entropy-based methods for evaluating each. These metrics make it possible to measure architecture complexity effectively and provide a solid basis for project development and decision making.

9.
Granular Computing (GrC) is a new soft computing method. This paper uses bit representations of information granules to study soft rules in information systems and the relationships among their measures. Specifically, it first uses soft rules to analyze the relationships among association rules, decision rules, and functional dependencies; it then studies the relationships among the measures of association rules, decision rules, and extensional functional dependencies, and establishes a unified model for these measures.

10.
Applying a Configuration Complexity Model to System Operations and Maintenance   Cited by: 1 (self-citations: 1, by others: 0)
This paper surveys configuration complexity and the current state of research, then proposes an improved configuration complexity model that measures how complex it is to configure an information system through five indicators: execution complexity, parameter complexity, context complexity, interaction complexity, and parallel complexity. The model is applied to the configuration process of a real application system to locate configuration hot spots. Methods for reducing complexity are also given: by describing the configuration process in XML and combining it with a Web services platform, the complexity of the configuration process is reduced to a certain degree.

11.
Process modeling languages such as EPCs, BPMN, flow charts, UML activity diagrams, Petri nets, etc., are used to model business processes and to configure process-aware information systems. It is known that users have problems understanding these diagrams. In fact, even process engineers and system analysts have difficulties in grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20% of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate and compare the different metrics. It turns out that our new metric focusing on the structuredness outperforms existing metrics.

12.
Two types of models can assist the information system manager in gaining greater insight into the system development process. They are: isomorphic models that represent cause-effect relationships between certain conditions (e.g., structured techniques) and certain observable states (e.g., productivity change); and paramorphic models that describe an outcome but do not describe the processes or variables that influence the outcome (e.g., estimation of project time or cost). The two models are shown to be interrelated since the relationships of the first model are determinants of the parameters of the second model. IS managers can make significant contributions by developing isomorphic models tailored to their own organizations. However, metrics that measure relevant characteristics of programs and systems are required before substantial progress can be made. Although some initial attempts have been made to develop metrics for program quality, program complexity, and programmer skill, much more work remains to be done. In addition, other metrics must be developed that will require the involvement of personnel, not only in the computer sciences, but also in information systems, the behavioral sciences, and IS management.

13.
The importance of high data quality and the need to consider data quality in the context of business processes are well acknowledged. Process modeling is mandatory for process-driven data quality management, which seeks to improve and sustain data quality by redesigning processes that create or modify data. A variety of process modeling languages exist, which organizations apply heterogeneously. The purpose of this article is to present a context-independent approach to integrating data quality into the variety of existing process models. The authors aim to improve communication of data quality issues across stakeholders while considering process model complexity. They build on a keyword-based literature review in 74 IS journals and three conferences, reviewing 1,555 articles from 1995 onwards. 26 articles, including 46 process models, were examined in detail. The literature review reveals the need for a context-independent and visible integration of data quality into process models. First, the authors derive the within-model integration, that is, enhancement of existing process models with data quality characteristics. Second, they derive the across-model integration, that is, integration of a data-quality-centric process model with existing process models. Since process models are mainly used for communicating processes, they consider the impact of integrating data quality and the application of patterns for complexity reduction on the models' complexity metrics. There is a need for further research on complexity metrics to improve the applicability of complexity reduction patterns. Missing knowledge about the interdependency between metrics and missing complexity metrics impede assessment and prediction of process model complexity and thus understandability. Finally, our context-independent approach can be used complementarily to data quality integration focusing on specific process modeling languages.

14.
Business process models, which are usually constructed by business designers from experience and analysis, are the main guidelines for services composition in service-oriented architecture (SOA) applications development. However, due to the complexity of business models, it is a challenging task for business process designers to optimize the process models dynamically in accordance with changes in business environments. In this paper, a process-mining-based method is proposed to help business process designers monitor efficiency or capture the changes of a business process. Firstly, we define a scenario model to depict business elements and their relationships which are critical to business process design. Based on the proposed scenario model, process mining algorithms, including control flow mining, roles mining and data flow mining, are carried out in a certain sequence to extract business scenarios from event logs recorded by SOA application systems. Finally, we implement a prototype using a logistics scenario to illustrate the feasibility of our method in SOA applications development.

15.
A Reconfiguration-Oriented Business Model for Enterprise Application Systems   Cited by: 2 (self-citations: 1, by others: 1)
Based on an analysis of design principles for reconfigurability, a business model based on business rules is proposed, together with expression forms for eight classes of rules. Through continuous decomposition of business elements, the frequently changing parts of the model are separated out and expressed as rules, improving the model's dynamism and flexibility. Finally, the modelling process based on this business model is studied.

16.
Architecture-level business services are identified based on business processes; likewise, in service-oriented product lines, identifying the domain architecture-level business services and their variability is preferably based on business processes and their variability. Identification of business services for a product line satisfying a set of given design metrics (such as cohesion and coupling) is extremely difficult for a domain architect, since there are many product configurations for which the services must be proper at the same time. This means that the identified services must have proper values for n metrics in m different configurations simultaneously. The problem becomes more serious when there are high degrees of variability and complexity embedded in the business processes that are the basis for service identification. We contribute to solving the multi-objective optimization problem of identifying business services for a product line by partitioning the graph of a business process variability model using the Non-dominated Sorting Genetic Algorithm-II. The service specification is achieved based on the results of the partitioning. The variability of the services is then determined in terms of mandatory and optional services as well as variability relationships, which are all represented in a Service Variability Model. The method was empirically evaluated through experimentation and showed proper levels of reusability and variability. Furthermore, the resulting models were fully consistent.

17.
Sun-Jen Huang  Richard Lai 《Software》1998,28(14):1465-1491
Communication software systems have become very large and complex. Recognizing the complexity of such software systems is a key element in their development activities. Software metrics are useful quantitative indicators for assessing and predicting software quality attributes, like complexity. However, most existing metrics are extracted from source programs at the implementation phase of the software life cycle. They cannot provide early feedback during the specification phase, and subsequently it is difficult and expensive to make changes to the system if so indicated by the metrics. It is therefore important to be able to measure system complexity at the specification phase. However, most software specifications are written in natural languages, from which metrics information is very hard to extract. In this paper, we describe how complexity information can be derived from a formal communication protocol specification written in Estelle, so that it is possible to predict the complexity of its implementation and subsequently manage its development better. © 1998 John Wiley & Sons, Ltd.

18.
This paper presents an empirical case study that predicted faults in modules based on the total information content of the operators. This metric is closely related to Harrison's average information content classification (AICC), which is the entropy of the operators. Most information theory-based metrics proposed in the literature have not been subjected to empirical predictive studies of real-world software systems. In contrast, this study shows that a simple information theory-based metric can be more useful for prediction of software quality than comparable metrics based on counts in the context of a commercial software development organization. Three models were considered, all based on operators as an abstraction of software. The model based on information content of the operators made more accurate predictions than two similar models based on the number of operators and the number of unique operators. The purpose of this paper is a fair comparison of the three metrics, rather than developing an optimal model. We have long advocated multivariate models for industrial use. The case study considered three large commercial systems, written in assembly language, and developed consecutively by professional programmers. The first system was used to estimate parameters of the models. The subsequent two were used to evaluate the accuracy of model predictions.
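The abstract describes AICC as the entropy of the operator distribution; a minimal sketch of that computation (the operator stream below is invented for illustration) could be:

```python
from collections import Counter
from math import log2

def operator_entropy(operators):
    """Shannon entropy of the operator distribution (AICC-style measure)."""
    counts = Counter(operators)
    total = len(operators)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical operator stream extracted from a module's source.
ops = ["+", "+", "*", "=", "=", "=", "if", "+"]
print(round(operator_entropy(ops), 3))  # 1.811
```

The paper's "total information content" presumably scales an entropy-like quantity by the size of the operator stream, but the abstract does not give the exact definition.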

19.
Complexity impairs the maintainability and understandability of conceptual models. Complexity metrics have been used in software engineering and business process management (BPM) to capture the degree of complexity of conceptual models. The recent introduction of the Decision Model and Notation (DMN) standard provides opportunities to shift towards the Separation of Concerns paradigm when it comes to modelling processes and decisions. However, unlike for processes, no studies exist that address the representational complexity of DMN decision models. In this paper, we provide an initial set of complexity metrics for DMN models. We gather insights from the process modelling and software engineering fields to propose complexity metrics for DMN decision models. Additionally, we provide an empirical complexity assessment of DMN decision models. For the decision requirements level of the DMN standard, 19 metrics were proposed, while 7 metrics were put forward for the decision logic level. For decision requirements, the model size-based metrics, the Durfee Square Metric (DSM) and the Perfect Square Metric (PSM), prove to be the most suitable. For the decision logic level of DMN, the Hit Policy Usage (HPU) and the Total Number of Input Variables (TNIV) were evaluated as suitable for measuring DMN decision table complexity.
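The Durfee Square Metric mentioned above follows the h-index-style "Durfee square" idea; a small sketch of that computation (which element of a DMN model is counted is an assumption here, since the abstract does not say) might be:

```python
def durfee_square(values):
    """Largest d such that at least d of the values are >= d (h-index style)."""
    d = 0
    for i, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= i:
            d = i
        else:
            break
    return d

# Hypothetical: incoming information requirements per decision in a DRD.
print(durfee_square([5, 4, 4, 2, 1]))  # 3
```

Here 3 decisions each have at least 3 incoming requirements, but no 4 decisions have at least 4, so the metric is 3.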

20.
Goal-oriented Requirements Engineering approaches have become popular in the Requirements Engineering community as they provide expressive modelling languages for requirements elicitation and analysis. However, as a common challenge, such approaches still struggle when it comes to managing the accidental complexity of their models. Furthermore, those models might be incomplete, resulting in insufficient information for proper understanding and implementation. In this paper, we provide a set of metrics, which are formally specified and have tool support, to measure and analyse the complexity and completeness of goal models, in particular social goal models (e.g. i*). Concerning complexity, the aim is to identify refactoring opportunities to improve the modularity of those models, and consequently reduce their accidental complexity. With respect to completeness, the goal is to automatically detect model incompleteness. We evaluate these metrics by applying them to a set of well-known system models from industry and academia. Our results suggest refactoring opportunities in the evaluated models, and provide a timely feedback mechanism for requirements engineers on how close they are to completing their models.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号