Similar Documents
 20 similar documents retrieved (search time: 46 ms)
1.
Modern software systems are usually designed in the Service-Oriented Architecture (SOA), which provides methods for system development and integration of existing, reusable services. Due to the growing complexity of such systems, there is a need to design them in a way which enables adaptation to changes in the execution environment. This paper presents the Adaptive ESB framework for adaptive execution of services with the use of statistical models representing knowledge about service execution. Statistical models are exploited in many different areas, but applying them to SOA applications requires specific methods for their identification, updating and processing. A statistical model of service execution represents knowledge of how a complex system behaves as a high-level abstraction of a system related to the problem space.
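As a purely illustrative sketch (not the Adaptive ESB framework itself), the following shows how a statistical model of service execution might be kept up to date at runtime and consulted for an adaptation decision; the exponentially weighted moving average and the service instance names are assumptions.

```python
# Minimal sketch, assuming an EWMA model of response times per service instance:
# the model is updated from observed executions and used to route the next request.

class ExecutionModel:
    def __init__(self, alpha=0.2):
        self.alpha = alpha          # smoothing factor for the moving average
        self.estimate = {}          # service instance -> predicted response time (ms)

    def update(self, instance, observed_time):
        prev = self.estimate.get(instance, observed_time)
        self.estimate[instance] = (1 - self.alpha) * prev + self.alpha * observed_time

    def best_instance(self, candidates):
        # adaptation decision: prefer the instance with the lowest predicted time;
        # unknown instances default to +inf and are never preferred in this sketch
        return min(candidates, key=lambda c: self.estimate.get(c, float("inf")))

model = ExecutionModel()
model.update("payment-v1", 120.0)
model.update("payment-v2", 95.0)
print(model.best_instance(["payment-v1", "payment-v2"]))   # -> payment-v2
```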

2.
Savor, T.; Seviora, R.E. Computer, 1998, 31(8): 68-74
To date, no method has explicitly and cost effectively dealt with failure detection in software systems whose specifications are nondeterministic. In such systems, the specification permits multiple outputs for the same input sequence and system state. Nondeterminism in specifications is advantageous because the specification writer can avoid stating irrelevant behavior as mandatory, freeing the software designer to choose a behavioral alternative that would yield a more desirable implementation. Unfortunately, this flexibility comes at a cost to the failure detection mechanism. It must accommodate all the target system's legal behavioral alternatives and avoid favoring one of them. The article describes a hierarchical supervisor whose failure detection mechanism explicitly addresses systems with nondeterministic specifications. The supervisor, a unit separate from the target system, observes the system's external inputs and outputs and reports any failures. Its hierarchical structure results from splitting the task of identifying the behavioral alternative the target system chooses from the task of checking the details of system behavior. This structure makes it possible to efficiently trade off detection accuracy and computational cost. To evaluate their approach, the authors created a prototype supervisor and used it to supervise the execution of the control program of a small telephone exchange. Results indicate that the hierarchical supervisor can significantly reduce the computational cost of considering the target system's behavioral alternatives. However, although the supervisor's computational cost is significantly reduced, it is still higher than that for the target system  相似文献   
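A toy sketch of the underlying idea, not the authors' hierarchical supervisor: the set of behavioral alternatives permitted by a nondeterministic specification is pruned against each observed output, and a failure is reported only when no legal alternative remains. The alternatives and outputs below are assumptions.

```python
# Sketch: alternatives are modelled as allowed output sequences; the supervisor
# keeps every alternative consistent with the observations so far.

def supervise(alternatives, observed_outputs):
    candidates = {name: list(seq) for name, seq in alternatives.items()}
    for step, out in enumerate(observed_outputs):
        candidates = {n: s for n, s in candidates.items()
                      if step < len(s) and s[step] == out}
        if not candidates:
            return f"failure at step {step}: output {out!r} matches no alternative"
    return f"conforms to alternative(s): {sorted(candidates)}"

spec = {"alt-A": ["dial-tone", "ring", "connect"],
        "alt-B": ["dial-tone", "busy"]}
print(supervise(spec, ["dial-tone", "ring", "connect"]))   # conforms to alt-A
print(supervise(spec, ["dial-tone", "silence"]))           # failure at step 1
```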

3.
This paper presents COnfECt, a model learning approach, which aims at recovering the functioning of a component-based system from its execution traces. We refer here to non-concurrent systems whose internal interactions among components are not observable from the environment. COnfECt is specialised in the detection of components of a black-box system and in the inference of models called systems of labelled transition systems (LTS). COnfECt tries to detect components and their specific behaviours in traces, then it generates an LTS for every component discovered, which captures its behaviours. It also synchronises the LTSs together to express the functioning of the whole system. COnfECt relies on machine learning techniques to build models: it uses the notion of correlation among actions in traces to detect component behaviours and exploits a clustering technique to merge similar LTSs and synchronise them. We describe the three steps of COnfECt and the related algorithms in this paper, and then present some preliminary experiments.
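A sketch under assumptions, not COnfECt itself: a trace is split into candidate component behaviours wherever the correlation between two consecutive actions drops below a threshold. Here "correlation" is simply the fraction of shared tokens in the action labels, a stand-in for the correlation notion used by the approach.

```python
def correlation(a, b):
    ta, tb = set(a.split(".")), set(b.split("."))
    return len(ta & tb) / len(ta | tb)

def split_trace(trace, threshold=0.3):
    segments, current = [], [trace[0]]
    for prev, act in zip(trace, trace[1:]):
        if correlation(prev, act) < threshold:
            segments.append(current)   # low correlation: assume a component boundary
            current = []
        current.append(act)
    segments.append(current)
    return segments

trace = ["cart.add", "cart.checkout", "payment.init", "payment.confirm"]
print(split_trace(trace))
# -> [['cart.add', 'cart.checkout'], ['payment.init', 'payment.confirm']]
```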

4.
In this paper we present an approach for supporting the semi-automated architectural abstraction of architectural models throughout the software life-cycle. It addresses the problem that the design and implementation of a software system often drift apart as software systems evolve, leading to architectural knowledge evaporation. Our approach provides concepts and tool support for the semi-automatic abstraction of architecture component and connector views from implemented systems and keeping the abstracted architecture models up-to-date during software evolution. In particular, we propose architecture abstraction concepts that are supported through a domain-specific language (DSL). Our main focus is on providing architectural abstraction specifications in the DSL that only need to be changed, if the architecture changes, but can tolerate non-architectural changes in the underlying source code. Once the software architect has defined an architectural abstraction in the DSL, we can automatically generate architectural component views from the source code using model-driven development (MDD) techniques and check whether architectural design constraints are fulfilled by these models. Our approach supports the automatic generation of traceability links between source code elements and architectural abstractions using MDD techniques to enable software architects to easily link between components and the source code elements that realize them. It enables software architects to compare different versions of the generated architectural component view with each other. We evaluate our research results by studying the evolution of architectural abstractions in different consecutive versions of five open source systems and by analyzing the performance of our approach in these cases.  相似文献   
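For illustration only (the paper uses a dedicated DSL and MDD tooling): package-name patterns map source files to architectural components, and a simple design constraint is checked on the abstracted component view. The patterns, component names and the constraint below are assumptions.

```python
import re

abstraction_rules = {           # pattern over source paths -> component
    r"^org/shop/ui/.*":        "WebFrontend",
    r"^org/shop/orders?/.*":   "OrderManagement",
    r"^org/shop/persistence/": "Storage",
}

def component_of(source_path):
    for pattern, component in abstraction_rules.items():
        if re.match(pattern, source_path):
            return component
    return None   # non-architectural code: tolerated without changing the rules

def check_constraint(dependencies, forbidden=("WebFrontend", "Storage")):
    # constraint: the front end must not depend directly on the storage layer
    return [(a, b) for a, b in dependencies
            if (component_of(a), component_of(b)) == forbidden]

deps = [("org/shop/ui/Cart.java", "org/shop/persistence/Db.java")]
print(check_constraint(deps))   # violation reported on the abstracted view
```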

5.
Models@run.time   Total citations: 2 (self-citations: 0, citations by others: 2)
Blair, G.; Bencomo, N.; France, R.B. Computer, 2009, 42(10): 22-27
Runtime adaptation mechanisms that leverage software models extend the applicability of model-driven engineering techniques to the runtime environment. Contemporary mission-critical software systems are often expected to safely adapt to changes in their execution environment. Given the critical roles these systems play, it is often inconvenient to take them offline to adapt their functionality. Consequently, these systems are required, when feasible, to adapt their behavior at runtime with little or no human intervention. A promising approach to managing complexity in runtime environments is to develop adaptation mechanisms that leverage software models, referred to as models@run.time. Work on models@run.time seeks to extend the applicability of models produced in model-driven engineering (MDE) approaches to the runtime environment. Models@run.time is a causally connected self-representation of the associated system that emphasizes the structure, behavior, or goals of the system from a problem space perspective.
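A minimal sketch of the causal-connection idea, not a models@run.time implementation: the runtime model mirrors part of the system state, and changes made on the model are propagated back to the running system. The "system" and its tuning knob are assumptions.

```python
class RunningSystem:
    def __init__(self):
        self.thread_pool_size = 4
    def resize_pool(self, n):
        self.thread_pool_size = n       # the actual adaptation action

class RuntimeModel:
    """Causally connected self-representation of RunningSystem (sketch)."""
    def __init__(self, system):
        self._system = system
    @property
    def pool_size(self):                # the model reflects the system ...
        return self._system.thread_pool_size
    @pool_size.setter
    def pool_size(self, n):             # ... and drives it when changed
        self._system.resize_pool(n)

running = RunningSystem()
model = RuntimeModel(running)
model.pool_size = 8                     # adapt via the model, not the code
print(running.thread_pool_size)         # -> 8
```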

6.
7.
Modern software systems are increasingly requested to be adaptive to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. Non-functional requirements—such as reliability and performance—heavily depend on the environment in which the application is embedded. Thus changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.  相似文献   
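A schematic sketch of the monitor-analyse-react cycle described above, not the paper's probabilistic machinery: the reliability of an external service is re-estimated from observed invocations, and an adaptation is triggered when the estimate drops below the contractual QoS threshold. The threshold, window and fallback action are assumptions.

```python
def estimate_reliability(outcomes):
    return sum(outcomes) / len(outcomes)          # fraction of successful calls

def adapt_if_needed(outcomes, required=0.98):
    reliability = estimate_reliability(outcomes)  # continuous verification step
    if reliability < required:
        return f"adapt: rebind to backup provider (estimate {reliability:.3f})"
    return f"no change needed (estimate {reliability:.3f})"

window = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]           # 1 = success, 0 = failure
print(adapt_if_needed(window))                     # estimate 0.900 -> adapt
```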

8.
Requirements change both during and after a phase of development for a variety of reasons, including error correction and feature changes. Requirements change management is one of the most complex and difficult problems to deal with in requirements elicitation and tracking. It is generally not understood how a specific change propagates through the specification and into the code. In this paper we capture requirements changes as series of atomic changes in specifications. Using a rigorous specification method called sequence‐based specification, we propose a set of algorithms for managing all possible atomic requirements changes. The algorithms have been formulated within an axiom system for sequence‐based specification and proven for correctness. They have also been implemented in a prototype tool with which users are able to push requirements changes through to changes in specifications, maintain old specifications over time and evolve them into new specifications with the least amount of human interaction and rework. The approach of utilizing state machines to model and manage requirements changes guarantees strong evidence about the correctness and completeness of the proposed theory that will lead to more reliable software in the presence of change, especially with embedded systems and safety‐critical systems. The solution described is general enough for adoption by software and system developers, and well suited for incremental development. Copyright © 2008 John Wiley & Sons, Ltd.  相似文献   
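A loose illustration only, not the authors' axiom system or tool: a sequence-based specification as a mapping from stimulus sequences to responses, plus reductions declaring a sequence equivalent to an earlier one. One atomic change, altering the response of a mapped sequence, is shown; the stimuli, responses and reductions are assumptions.

```python
spec = {                       # enumerated stimulus sequence -> response
    ("on",):         "ready",
    ("on", "start"): "running",
    ("on", "stop"):  "ready",
}
reductions = {("on", "stop"): ("on",)}   # ("on","stop") behaves like ("on",)

def change_response(sequence, new_response):
    """Atomic change: update one mapped sequence and report sequences to re-review."""
    spec[sequence] = new_response
    affected = [s for s, r in reductions.items() if r == sequence]
    return affected             # sequences reduced to it are candidates for rework

print(change_response(("on",), "standby"))   # -> [('on', 'stop')]
print(spec[("on",)])                          # -> standby
```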

9.
There is an increasing interest in techniques that support analysis and measurement of fielded software systems. These techniques typically deploy numerous instrumented instances of a software system, collect execution data when the instances run in the field, and analyze the remotely collected data to better understand the system's in-the-field behavior. One common need for these techniques is the ability to distinguish execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which condition a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). To address the limitations of existing approaches, we have developed three techniques for automatically classifying execution data as belonging to one of several classes. In this paper, we introduce our techniques and apply them to the binary classification of passing and failing behaviors. Our three techniques impose different overheads on program instances and, thus, each is appropriate for different application scenarios. We performed several empirical studies to evaluate and refine our techniques and to investigate the trade-offs among them. Our results show that 1) the first technique can build very accurate models, but requires a complete set of execution data; 2) the second technique produces slightly less accurate models, but needs only a small fraction of the total execution data; and 3) the third technique allows for even further cost reductions by building the models incrementally, but requires some sequential ordering of the software instances' instrumentation.  相似文献   
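A toy sketch of binary classification of execution profiles, not one of the three techniques evaluated in the paper: each execution is a vector of event counts, a centroid is computed for the labelled passing and failing runs, and a new run is labelled by the nearer centroid. The profiles are made up.

```python
def centroid(profiles):
    n = len(profiles)
    return [sum(col) / n for col in zip(*profiles)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(run, passing, failing):
    c_pass, c_fail = centroid(passing), centroid(failing)
    return "pass" if distance(run, c_pass) <= distance(run, c_fail) else "fail"

passing = [[10, 0, 3], [12, 0, 4]]    # counts of three instrumented events per run
failing = [[9, 5, 1], [11, 6, 0]]
print(classify([10, 4, 2], passing, failing))   # -> fail
```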

10.
Studies of the sensitivity of Monte Carlo models to changes in the values of their features involve repetitive simulations. Subsequent displays and analyses express variation in simulated outcomes as a function of change in location in multidimensional feature space and provide feedback via inputs for future simulations. A software system has been designed to control execution of the interacting programs required, reducing the need for human intervention. Assumptions about the goals and operation of the system as a whole are isolated within a program which executes component packages using interprogram control and communication mechanisms. The latter allow component programs to operate independently of the sources of data or the execution environment: they may be used separately, or in software systems, such as for sensitivity analysis of a Monte Carlo epidemic model. The methodology contributes to modularity at the level of executable programs and to the plausibility and efficiency of sensitivity studies.  相似文献   
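A schematic driver only, not the software system described above: repeated Monte Carlo runs are executed over a grid of feature values so that variation in the simulated outcome can be examined as a function of feature-space location. The epidemic model below is a deliberately crude stand-in.

```python
import random
import statistics

def simulate(infection_rate, seed):
    random.seed(seed)
    infected = 1
    for _ in range(30):                         # 30 time steps
        infected += sum(random.random() < infection_rate for _ in range(infected))
    return infected

def sensitivity(feature_values, runs=20):
    results = {}
    for rate in feature_values:
        outcomes = [simulate(rate, seed) for seed in range(runs)]
        results[rate] = (statistics.mean(outcomes), statistics.stdev(outcomes))
    return results

for rate, (mean, sd) in sensitivity([0.05, 0.10, 0.15]).items():
    print(f"rate={rate:.2f}: mean={mean:.1f}, sd={sd:.1f}")
```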

11.
With the wide deployment of Web services and Web service composition applications in distributed networks, the scale and complexity of Web services keep increasing, so various faults may occur while services are running, and timely fault diagnosis and removal for service systems is becoming ever more important. To address the practical problems of incomplete system models and noisy historical data in fault diagnosis, this paper proposes a behavior-inference diagnosis method based on a service behavior model. The method combines multiple kinds of diagnostic information through weighting to build the service behavior model, applies the decoding idea of hidden Markov models to infer the normal execution sequence that best matches an abnormal execution sequence, and compares it with the observed sequence, so that service faults are located from the differences found. Experiments show that, when diagnosing with diagnostic information containing different proportions of noise, the method achieves higher diagnostic accuracy than traditional service fault diagnosis methods.
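An illustrative Viterbi decoding only; the paper's weighted behavior model is richer. Given an HMM over normal service operations, the most likely normal operation sequence for an observed (possibly faulty) trace is decoded and can then be compared with the trace to locate suspected faults. The states, probabilities and trace below are assumptions.

```python
states = ["receive", "process", "reply"]
start_p = {"receive": 0.8, "process": 0.1, "reply": 0.1}
trans_p = {"receive": {"receive": 0.1, "process": 0.8, "reply": 0.1},
           "process": {"receive": 0.1, "process": 0.2, "reply": 0.7},
           "reply":   {"receive": 0.6, "process": 0.2, "reply": 0.2}}
emit_p  = {"receive": {"recv_msg": 0.9, "timeout": 0.1},
           "process": {"db_query": 0.8, "timeout": 0.2},
           "reply":   {"send_msg": 0.9, "timeout": 0.1}}

def viterbi(observations):
    # standard Viterbi: keep the best probability and path ending in each state
    V = [{s: start_p[s] * emit_p[s].get(observations[0], 1e-6) for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s].get(obs, 1e-6), p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

observed = ["recv_msg", "timeout", "send_msg"]
print(viterbi(observed))   # -> ['receive', 'process', 'reply'], the matching normal sequence
```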

12.
Context: The alignment degree existing between a business process and the supporting software systems strongly affects the performance of the business process execution. Methodologies and tools are needed for detecting the alignment level and keeping a business process aligned with the supporting software systems even when they evolve. Objective: This paper aims to provide adequate support for managing such alignment and suggesting evolution actions if misalignment is detected. It proposes an approach including modeling and measuring activities for evaluating the alignment level and suggesting evolution activities, if needed. Method: The proposed approach is composed of three main phases. The first phase regards the modeling of the business process and the software systems supporting it, by applying a modeling notation based on UML and adequately extended for representing business processes. The second phase concerns the evaluation of the alignment degree through the assessment of a set of metrics codifying the alignment concept. Finally, the last phase analyses the evaluation results to suggest evolution activities if misalignment is detected. Results: The paper analyses the application of the proposed approach to a case study regarding a working business process and the related software system. The obtained results provided useful suggestions for evolving the supporting software system and improving the alignment level existing between it and the supported business process. Conclusion: The approach contributes to all phases of process and software system evolution, although it may need to be improved for identifying the impact of changes. The proposed approach facilitates the understanding of business processes, software systems and related models. This favors the interaction of software and business analysts, as it was possible to better formulate the interviews to be conducted with regard to the objectives and, thus, to collect the required data.
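A single illustrative alignment metric; the paper defines a whole set. Activity coverage is the fraction of business-process activities supported by at least one software component; the activities, components and support relation below are assumptions.

```python
def activity_coverage(activities, support):
    covered = [a for a in activities if support.get(a)]
    uncovered = [a for a in activities if not support.get(a)]
    return len(covered) / len(activities), uncovered

activities = ["receive order", "check stock", "invoice", "ship"]
support = {"receive order": ["OrderService"],
           "check stock":   ["InventoryService"],
           "invoice":       ["BillingService"],
           "ship":          []}                      # misalignment here

coverage, gaps = activity_coverage(activities, support)
print(f"coverage = {coverage:.2f}; evolve the system to support: {gaps}")
# -> coverage = 0.75; evolve the system to support: ['ship']
```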

13.
Software Quality Journal - Self-adaptive systems dynamically change their structure and behavior in response to changes in their execution environment to ensure the quality of the services they...  相似文献   

14.
Specification theories as a tool in model-driven development processes of component-based software systems have recently attracted a considerable attention. Current specification theories are however qualitative in nature, and therefore fragile in the sense that the inevitable approximation of systems by models, combined with the fundamental unpredictability of hardware platforms, makes it difficult to transfer conclusions about the behavior, based on models, to the actual system. Hence this approach is arguably unsuited for modern software systems. We propose here the first specification theory which allows to capture quantitative aspects during the refinement and implementation process, thus leveraging the problems of the qualitative setting. Our proposed quantitative specification framework uses weighted modal transition systems as a formal model of specifications. These are labeled transition systems with the additional feature that they can model optional behavior which may or may not be implemented by the system. Satisfaction and refinement is lifted from the well-known qualitative to our quantitative setting, by introducing a notion of distances between weighted modal transition systems. We show that quantitative versions of parallel composition as well as quotient (the dual to parallel composition) inherit the properties from the Boolean setting.  相似文献   
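A rough sketch of the idea only, not the paper's formal distance: a weighted modal specification has "must" and "may" transitions with weight intervals; an implementation refines it if every must-transition is present and every implemented transition is allowed by some may-transition, and the maximal deviation of implemented weights from the allowed intervals is reported as a naive distance. All systems below are assumptions.

```python
spec_must = {("s0", "send", "s1")}
spec_may  = {("s0", "send", "s1"): (2, 4),       # allowed weight interval
             ("s1", "ack",  "s0"): (0, 1)}

impl = {("s0", "send", "s1"): 5,                 # weight slightly too high
        ("s1", "ack",  "s0"): 1}

def refinement_distance(impl, spec_must, spec_may):
    if any(t not in impl for t in spec_must):
        return None                               # a mandatory behaviour is missing
    if any(t not in spec_may for t in impl):
        return None                               # behaviour the spec does not allow
    dev = 0.0
    for t, w in impl.items():
        lo, hi = spec_may[t]
        dev = max(dev, lo - w if w < lo else w - hi if w > hi else 0.0)
    return dev

print(refinement_distance(impl, spec_must, spec_may))   # -> 1 (weight off by one)
```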

15.
16.
The development of new Web services through the composition of existing ones has gained a considerable momentum as a means to realise business-to-business collaborations. Unfortunately, given that services are often developed in an ad hoc fashion using manifold technologies and standards, connecting and coordinating them in order to build composite services is a delicate and time-consuming task. In this paper, we describe the design and implementation of a system in which services are composed using a model-driven approach, and the resulting composite services are orchestrated following a peer-to-peer paradigm. The system provides tools for specifying composite services through statecharts, data conversion rules, and multi-attribute provider selection policies. These specifications are interpreted by software components that interact in a peer-to-peer way to coordinate the execution of the composite service. We report results of an experimental evaluation showing the relative advantages of this peer-to-peer approach with respect to a centralised one.  相似文献   
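A sketch of one ingredient only, the multi-attribute provider-selection policy, using simple additive weighting; the statechart orchestration itself is not shown. The providers, attributes and weights are assumptions.

```python
providers = {
    "FastShip":  {"price": 12.0, "latency_ms": 150, "rating": 4.2},
    "CheapShip": {"price":  7.0, "latency_ms": 420, "rating": 3.8},
}
weights = {"price": 0.4, "latency_ms": 0.4, "rating": 0.2}
maximize = {"rating"}                     # higher is better only for rating

def score(attrs):
    total = 0.0
    for attr, w in weights.items():
        values = [p[attr] for p in providers.values()]
        lo, hi = min(values), max(values)
        norm = 0.5 if hi == lo else (attrs[attr] - lo) / (hi - lo)
        total += w * (norm if attr in maximize else 1 - norm)   # invert cost attributes
    return total

best = max(providers, key=lambda name: score(providers[name]))
print(best, round(score(providers[best]), 2))    # -> FastShip 0.6
```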

17.
A method for automatically inferring hierarchical task models of complex systems   Total citations: 1 (self-citations: 0, citations by others: 1)
The complex systems that underpin Internet services are hard to debug and analyze, and understanding their runtime behavior is key to debugging and analyzing them. Existing techniques abstract a system's dynamic runtime behavior as causal execution paths and analyze the system's behavior on that basis, but these methods either require manual annotation of the system's code or require users to describe the system's execution structure, and thus demand considerable manual assistance. This paper describes a method for automatically inferring hierarchical task models of complex systems. By observing system execution dynamically through instrumentation, the method automatically infers the system's runtime task model from a set of heuristics, including task boundaries and the causal dependencies between tasks. Using a clustering method, the hierarchical structure of the task model can be further inferred. Applying the inference method to real systems (Apache and PacificA) shows that the resulting models help in understanding the systems' dynamic execution and in analyzing and resolving performance problems.
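A toy illustration of the clustering step only, not the paper's heuristics or instrumentation: tasks recovered from traces are represented by the sets of event labels they contain, and tasks whose label sets are sufficiently similar (Jaccard similarity) are merged into one task type. The traces are made up.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster_tasks(tasks, threshold=0.6):
    clusters = []
    for name, labels in tasks.items():
        for cluster in clusters:
            if jaccard(labels, cluster["labels"]) >= threshold:
                cluster["members"].append(name)
                cluster["labels"] |= labels      # widen the cluster's label set
                break
        else:
            clusters.append({"members": [name], "labels": set(labels)})
    return [c["members"] for c in clusters]

tasks = {"t1": {"accept", "parse", "respond"},
         "t2": {"accept", "parse", "respond", "log"},
         "t3": {"flush", "sync"}}
print(cluster_tasks(tasks))   # -> [['t1', 't2'], ['t3']]
```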

18.
To simulate time-constrained operations and scheduling for Network-on-Chip (NoC) systems, we introduce a new set of component specifications at flit level grounded in Action-Level Real-Time DEVS formalism. These models capture the dynamics of NoC systems through action-based behavior under strict execution time intervals. These DEVS-based models are well-suited for development and simulation of asynchronous NoC architectures. This is achieved by extending the DEVS-Suite simulator to support real-time executions of ALRT-DEVS models. Representative simulation models capturing structure and behavior of prototypical Mesh NoC systems are developed. A set of experiments are designed, implemented, executed, and analyzed to show the kind of real-time simulation capabilities that can be achieved for Network-on-Chip systems.  相似文献   
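A bare-bones sketch of an atomic model in DEVS style, not ALRT-DEVS and not the DEVS-Suite API: a NoC router port that buffers flits and forwards one per service interval. The time-advance function returns the delay until the next internal transition; all parameters are assumptions.

```python
class RouterPort:
    def __init__(self, service_time=2.0):
        self.buffer = []
        self.service_time = service_time

    def ta(self):                        # time advance: delay to next internal event
        return self.service_time if self.buffer else float("inf")

    def delta_ext(self, flit):           # external transition: a flit arrives
        self.buffer.append(flit)

    def output(self):                    # output emitted just before delta_int
        return self.buffer[0] if self.buffer else None

    def delta_int(self):                 # internal transition: flit forwarded
        if self.buffer:
            self.buffer.pop(0)

port = RouterPort()
port.delta_ext("flit-1")
print(port.ta())        # -> 2.0 (next forwarding after one service interval)
print(port.output())    # -> flit-1
port.delta_int()
print(port.ta())        # -> inf (buffer empty, model is passive)
```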

19.
This paper describes a prototype Knowledge-Based Software Engineering Environment used to demonstrate the concepts of reuse of software requirements and software architectures. The prototype environment, which is application-domain independent, is used to support the development of domain models and to generate target system specifications from them. The prototype environment consists of an integrated set of commercial-off-the-shelf software tools and custom-developed software tools. The concept of reuse is prevalent at several levels of the domain modeling method and prototype environment. The environment itself is domain-independent, thereby supporting the specification of diverse application domain models. The domain modeling method specifies a family of systems rather than a single system; features characterize the variations in functional requirements supported by the family, and individual family members are specified by the features they are to support. The knowledge-based approach to target system generation provides the rules for generating target system specifications from the domain model; target system specifications, themselves, may be stored in an object repository for subsequent retrieval and reuse.
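An illustrative sketch only; the prototype described above is a tool-based environment. It shows a domain model as a set of optional features with dependencies and a target system specification generated from the features a family member selects. The feature names and dependencies are assumptions.

```python
domain_model = {
    "base":        {"requires": set(),    "spec": "core messaging"},
    "encryption":  {"requires": {"base"}, "spec": "encrypt payloads"},
    "audit-log":   {"requires": {"base"}, "spec": "log all requests"},
    "compression": {"requires": {"base"}, "spec": "compress payloads"},
}

def generate_target_spec(selected):
    # a family member is specified by its selected features; dependencies must hold
    missing = {dep for f in selected for dep in domain_model[f]["requires"]} - set(selected)
    if missing:
        raise ValueError(f"unsatisfied feature dependencies: {sorted(missing)}")
    return [domain_model[f]["spec"] for f in selected]

print(generate_target_spec(["base", "encryption", "audit-log"]))
# -> ['core messaging', 'encrypt payloads', 'log all requests']
```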

20.
The mobile agents create a new paradigm for data exchange and resource sharing in rapidly growing and continually changing computer networks. In a distributed system, failures can occur in any software or hardware component. A mobile agent can get lost when its hosting server crashes during execution, or it can get dropped in a congested network. Therefore, survivability and fault tolerance are vital issues for deploying mobile-agent systems. This fault tolerance approach deploys three kinds of cooperating agents to detect server and agent failures and recover services in mobile-agent systems. An actual agent is a common mobile agent that performs specific computations for its owner. Witness agents monitor the actual agent and detect whether it's lost. A probe recovers the failed actual agent and the witness agents. A peer-to-peer message-passing mechanism stands between each actual agent and its witness agents to perform failure detection and recovery through time-bounded information exchange; a log records the actual agent's actions. When failures occur, the system performs rollback recovery to abort uncommitted actions. Moreover, our method uses checkpointed data to recover the lost actual agent.  相似文献   
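A schematic sketch of the witness idea only, not the full three-agent protocol: a witness records the last status message received from the actual agent, flags a suspected failure when the time bound is exceeded, and hands back the latest committed checkpoint for rollback recovery. The time bound and checkpoint shape are assumptions.

```python
import time

class Witness:
    def __init__(self, time_bound=5.0):
        self.time_bound = time_bound
        self.last_heard = time.monotonic()
        self.checkpoint = {"step": 0, "results": []}

    def on_message(self, checkpoint):            # peer-to-peer status message
        self.last_heard = time.monotonic()
        self.checkpoint = checkpoint             # logged, committed progress

    def detect_failure(self):
        return time.monotonic() - self.last_heard > self.time_bound

    def recover(self):
        # rollback recovery: restart the actual agent from its committed state
        return dict(self.checkpoint)

w = Witness(time_bound=0.1)
w.on_message({"step": 3, "results": ["host-A", "host-B"]})
time.sleep(0.2)
if w.detect_failure():
    print("actual agent lost; resuming from", w.recover())
```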
