20 similar documents were found.
1.
The increasing pervasiveness of computing services in everyday life, combined with the dynamic nature of their execution contexts, constitutes a major challenge in guaranteeing the expected quality of such services at runtime. Quality of Service (QoS) contracts have been proposed to specify expected quality levels (QoS levels) under different context conditions, with different enforcing mechanisms. In this paper we present a definition for QoS contracts as a high-level policy for governing the behavior of software systems that self-adapt at runtime in response to context changes. To realize this contract definition, we specify its formal semantics and implement it in a software framework able to execute and reconfigure software applications, in order to keep their associated QoS contracts fulfilled. The contribution of this paper is threefold. First, we extend typed attributed graph transformation systems and finite-state machines, and use them as denotations to specify the semantics of QoS contracts. Second, this semantics makes it possible to systematically exploit design patterns at runtime by dynamically deploying them in the managed software application. Third, our semantics guarantees self-adaptive properties such as reliability and robustness in contract satisfaction. Finally, we evaluate the applicability of our semantics implementation by integrating and executing it in FraSCAti, a multi-scale component-based middleware, in three case studies.
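The contract-as-state-machine idea summarized above can be illustrated with a small, hedged sketch. This is not the paper's formal semantics; it only assumes a hypothetical contract whose states are per-context QoS levels and whose violation handler triggers a reconfiguration (e.g., deploying a design pattern at runtime).

```python
# Minimal sketch (not the paper's formal semantics): a QoS contract modeled as a
# finite-state machine whose states are QoS levels and whose transitions fire on
# context changes or measured violations. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class QoSLevel:
    name: str
    max_response_ms: float          # expected quality under this context condition
    reconfigure: Callable[[], None] # action deployed when the level is violated

class QoSContract:
    def __init__(self, levels: Dict[str, QoSLevel], initial_context: str):
        self.levels = levels
        self.context = initial_context

    def on_context_change(self, new_context: str) -> None:
        # A context change selects the QoS level that governs the system next.
        if new_context in self.levels:
            self.context = new_context

    def on_measurement(self, response_ms: float) -> bool:
        # Returns True if the contract stays fulfilled; otherwise triggers the
        # reconfiguration associated with the active level (e.g. deploying a
        # cache or load-balancer pattern in the managed application).
        level = self.levels[self.context]
        if response_ms <= level.max_response_ms:
            return True
        level.reconfigure()
        return False

# Usage: two context conditions with different expected quality levels.
contract = QoSContract(
    {
        "normal":    QoSLevel("normal", 200.0, lambda: print("no-op")),
        "high-load": QoSLevel("high-load", 500.0, lambda: print("deploy cache pattern")),
    },
    initial_context="normal",
)
contract.on_context_change("high-load")
print(contract.on_measurement(650.0))   # violation -> reconfiguration fires
```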
2.
Domain analysis of dynamic system reconfiguration (cited 1 time: 0 self-citations, 1 by others)
3.
One of the most promising approaches to developing component-based (possibly distributed) systems is that of coordination models and languages. Coordination programming enjoys a number of advantages, such as the ability to express different software architectures and abstract interaction protocols, support for multi-linguality, reusability, and programming-in-the-large. Configuration programming is another promising approach to developing large-scale, component-based systems, driven by the increasing need to support the dynamic evolution of components. In this paper we explore and exploit the relationship between the notions of coordination and (dynamic) configuration, and we illustrate the potential of control- or event-driven coordination languages to be used as languages for expressing dynamically reconfigurable software architectures. We argue that control-driven coordination shares its goals and aims with the notion of dynamic configuration, and we illustrate how the former can achieve the functionality required by the latter.
4.
Wei Li, Journal of Software: Evolution and Process, 2009, 21(1): 19-48
The significance of QoS-assurance is being increasingly recognized by both the research and wider communities. In the latter case, this recognition is driven by the increasing adoption by business of 24/7 software systems and the QoS decline that end-users experience when these systems undergo dynamic reconfiguration. At the beginning of 2006, the author set up a project named DynaQoS©-RDF (QoS-assurance of Dynamic Reconfiguration on Reconfigurable Dataflow Model), which was sponsored by CQ University Australia. Over the last two years, the author has investigated QoS-assurance for dataflow systems, which are characterized by the pipe-and-filter architecture. The research has addressed issues such as the global consistency of protocol transactions, the necessary and sufficient conditions for QoS-assurance, execution overhead control for reconfiguration, state transfer for stateful components, and the design of a QoS benchmark. This paper discusses these research issues. It also proposes various QoS strategies and presents a benchmark for evaluating QoS-assurance strategies for the dynamic reconfiguration of dataflow systems. This benchmark is implemented using the DynaQoS©-RDF v1.0 software platform. Various strategies, including those from the research literature, are benchmarked, and the strategies providing the best QoS-assurance are identified. Copyright © 2008 John Wiley & Sons, Ltd.
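For readers unfamiliar with the pipe-and-filter setting, the sketch below illustrates the two issues the abstract names, swapping a filter at runtime and transferring its state. It is not the DynaQoS-RDF platform; every name in it is hypothetical.

```python
# Minimal sketch of a pipe-and-filter dataflow with dynamic reconfiguration.
# Not the DynaQoS-RDF platform; it only illustrates runtime filter replacement
# and state transfer for stateful components.

import queue
import threading
import time

class Filter:
    """A stage function plus the state it carries across items."""
    def __init__(self, func, state=None):
        self.func = func
        self.state = state or {}

    def process(self, item):
        return self.func(item, self.state)

class Stage(threading.Thread):
    def __init__(self, filt: Filter, inbox: queue.Queue, outbox: queue.Queue):
        super().__init__(daemon=True)
        self.filt, self.inbox, self.outbox = filt, inbox, outbox
        self._lock = threading.Lock()

    def run(self):
        while True:
            item = self.inbox.get()
            with self._lock:                      # block briefly during reconfiguration
                self.outbox.put(self.filt.process(item))

    def replace_filter(self, new_filter: Filter):
        # Dynamic reconfiguration: quiesce the stage, transfer state, swap the filter.
        with self._lock:
            new_filter.state.update(self.filt.state)   # state transfer for stateful filters
            self.filt = new_filter

# Usage: count items, then hot-swap to a filter that also tracks a running sum.
inbox, outbox = queue.Queue(), queue.Queue()
stage = Stage(Filter(lambda x, s: s.update(count=s.get("count", 0) + 1) or x), inbox, outbox)
stage.start()
inbox.put(1); time.sleep(0.1)
stage.replace_filter(Filter(lambda x, s: s.update(total=s.get("total", 0) + x) or x * 2))
inbox.put(3); time.sleep(0.1)
print(outbox.get(), outbox.get(), stage.filt.state)   # 1 6 {'count': 1, 'total': 3}
```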
5.
In mobile ad hoc networks, providing QoS guarantees for the requirements of different multimedia applications has become a hot topic. Most current research focuses on the choice of metrics for QoS routing while neglecting the energy limitations inherent to mobile ad hoc networks, even though energy is an extremely critical factor for such networks. In this paper we therefore use a dynamic programming algorithm to handle the battery constraints of ad hoc networks. We first build a QoS routing model based on dynamic programming, then analyse the time complexity of the algorithm, and finally verify the superiority of the proposed improved algorithm through simulation.
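A hedged sketch of the general idea, not the paper's model: treat each routing state as a (node, remaining battery) pair and compute the minimum-delay route within an energy budget. The topology, costs, and budget below are made up for illustration.

```python
# Hedged sketch of energy-constrained QoS routing over (node, remaining-energy) states.
# Not the paper's model; the graph, costs, and battery budget are hypothetical.

import heapq

def min_delay_route(graph, source, dest, battery):
    """graph: {node: [(neighbor, delay, energy_cost), ...]}; battery: integer budget."""
    best = {(source, battery): 0.0}
    heap = [(0.0, source, battery, [source])]          # (delay so far, node, energy left, path)
    while heap:
        delay, node, energy, path = heapq.heappop(heap)
        if node == dest:
            return delay, path
        for nxt, d, e in graph.get(node, []):
            if e > energy:
                continue                                # battery constraint
            state = (nxt, energy - e)
            if delay + d < best.get(state, float("inf")):
                best[state] = delay + d
                heapq.heappush(heap, (delay + d, nxt, energy - e, path + [nxt]))
    return None                                         # no route within the energy budget

# Usage on a toy topology: the shorter-delay route through D costs too much energy.
topo = {
    "A": [("B", 2.0, 1), ("D", 1.0, 5)],
    "B": [("C", 2.0, 1)],
    "D": [("C", 1.0, 5)],
}
print(min_delay_route(topo, "A", "C", battery=3))       # (4.0, ['A', 'B', 'C'])
```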
6.
Research on remote dynamic reconfiguration technology for FPGAs (cited 1 time: 1 self-citation, 0 by others)
A method for remote dynamic reconfiguration of FPGAs is proposed, implemented by combining FPGA dynamic reconfiguration with GSM communication technology. GSM is used to transmit the configuration data wirelessly, and the data are stored on a CF card under the control of a microcontroller. Under the control of the embedded hard-core PowerPC 405 processor, the FPGA reads the new configuration data from the CF card through its Internal Configuration Access Port (ICAP) and configures the reconfigurable region to realize new functionality.
7.
International Journal of Computer Mathematics, 2012, 89(11): 2265-2278
Implemented through dynamic service composition and integration, Web applications have significantly affected our daily life, for example in e-commerce and e-government. However, the open and ever-changing environment makes Web users more vulnerable to usability problems such as unreachable pages and reduced responsiveness. Accordingly, there is a need to deliver reliable Web applications with attributes that cover correctness and reliability. For the efficient handling of failures, the compatibility verification of dynamic reconfiguration strategies is of great importance, since it can guarantee the robustness and high quality of Web-based software. This paper extends the classical finite state machine (FSM) to formalize the behaviour of Web applications, namely the extended FSM for Web applications (EFSM4WA) model. This model is also suitable for formally describing the interaction behaviours of dynamic reconfiguration when a Web application encounters a failure. The compatibility verification of dynamic reconfiguration is then carried out in two phases. In the first phase, a trace-projection approach checks compatibility against the synchronized product model in a qualitative way, selecting a set of candidate Web applications. In the second phase, performance is taken into consideration to choose a high-reliability Web application in a quantitative way. Finally, a case study demonstrates the applicability of our approach.
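As a rough illustration of the synchronized-product idea, and not the EFSM4WA formalism itself, the sketch below checks whether two small FSMs can jointly reach a common final state on their shared actions. All states and actions are hypothetical.

```python
# Hedged sketch of an FSM compatibility check via a synchronized product.
# Not the EFSM4WA model: compatibility here simply means a joint final state is reachable.

class FSM:
    def __init__(self, states, init, finals, trans):
        # trans: {(state, action): next_state}
        self.states, self.init, self.finals, self.trans = states, init, finals, trans

def synchronized_product_compatible(a: FSM, b: FSM) -> bool:
    """Explore the product on shared actions; compatible if a joint final state is reachable."""
    frontier, seen = [(a.init, b.init)], set()
    while frontier:
        sa, sb = frontier.pop()
        if (sa, sb) in seen:
            continue
        seen.add((sa, sb))
        if sa in a.finals and sb in b.finals:
            return True
        for action in {act for (_, act) in a.trans} & {act for (_, act) in b.trans}:
            if (sa, action) in a.trans and (sb, action) in b.trans:
                frontier.append((a.trans[(sa, action)], b.trans[(sb, action)]))
    return False

# Usage: a client that expects "request" then "response", and a service that provides them.
client  = FSM({"c0", "c1", "c2"}, "c0", {"c2"}, {("c0", "request"): "c1", ("c1", "response"): "c2"})
service = FSM({"s0", "s1", "s2"}, "s0", {"s2"}, {("s0", "request"): "s1", ("s1", "response"): "s2"})
print(synchronized_product_compatible(client, service))   # True
```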
8.
Joaquín Entrialgo, Journal of Systems and Software, 2011, 84(5): 810-820
In transactional systems, the objectives regarding quality of service are often specified by Service Level Objectives (SLOs) that stipulate a response time to be achieved for a percentile of the transactions. Usually, there are different client classes with different SLOs. In this paper, we extend a technique that enforces the fulfilment of the SLOs using admission control. The admission control of new user sessions is based on a response-time model. The technique proposed in this paper dynamically adapts the model to changes in workload characteristics and system configuration, so that the system can work autonomically, without human intervention. The technique requires no knowledge about the internals of the system; thus, it is easy to use and can be applied to many systems. Its utility is demonstrated by a set of experiments on a system that implements the TPC-App benchmark. The experiments show that the model adaptation works correctly in very different situations that include large and small changes in response times, increasing and decreasing response times, and different patterns of workload injection. In all these scenarios, the technique updates the model progressively until it adjusts to the new situation, and in intermediate situations the model never exhibits abnormal behaviour that could lead to a failure in the admission control component.
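The following sketch illustrates the general shape of SLO-driven admission control with a model adapted online. It is not the paper's technique: the model here is a simple exponentially weighted moving average per client class, and the margin standing in for the SLO percentile is a hypothetical parameter.

```python
# Hedged sketch of SLO-driven admission control with an adaptive response-time model.
# Not the paper's technique; thresholds and the margin are hypothetical.

class AdmissionController:
    def __init__(self, slo_ms: float, margin: float = 1.5, alpha: float = 0.1):
        self.slo_ms = slo_ms     # response time the SLO requires for this client class
        self.margin = margin     # hypothetical inflation standing in for the SLO percentile
        self.alpha = alpha       # adaptation speed of the model
        self.estimate_ms = 0.0

    def observe(self, response_ms: float) -> None:
        # Online model adaptation: an exponentially weighted moving average tracks
        # changes in workload characteristics and system configuration.
        self.estimate_ms += self.alpha * (response_ms - self.estimate_ms)

    def admit_new_session(self) -> bool:
        # Admit new user sessions only while the adapted model predicts the SLO is met.
        return self.estimate_ms * self.margin <= self.slo_ms

# Usage: the controller admits sessions at first, then rejects once latencies degrade.
ctrl = AdmissionController(slo_ms=300.0)
for r in [120, 150, 180, 900, 950, 1000]:
    ctrl.observe(r)
print(ctrl.admit_new_session())   # False once the adapted model exceeds the SLO
```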
9.
The strategic dynamic supply chain reconfiguration (DSCR) problem is to prescribe the location and capacity of each facility, select links used for transportation, and plan material flows through the supply chain, including production, inventory, backorder, and outsourcing levels. The objective is to minimize total cost. The network must be dynamically reconfigured (i.e., by opening facilities, expanding and/or contracting their capacities, and closing facilities) over time to accommodate changing trends in demand and/or costs. The problem involves a multi-period, multi-product, multi-echelon supply chain. The research objectives of this paper are: a traditional formulation and a network-based model of the DSCR problem; tests that promote an intuitive interpretation of our models; tests that identify the computational characteristics of each model to determine whether one offers superior solvability; and tests that identify the sensitivity of run time to the primary parameters.
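As a rough illustration only, and not the paper's notation, the total-cost objective of such a model has a structure along the lines of:

\min \sum_{t}\Big[\sum_{i}\big(f_{i}\,y_{it}+e_{i}\,x^{+}_{it}+c_{i}\,x^{-}_{it}\big)+\sum_{(i,j)}\tau_{ij}\,q_{ijt}+\sum_{p}\big(\rho_{p}\,P_{pt}+h_{p}\,I_{pt}+b_{p}\,B_{pt}+o_{p}\,O_{pt}\big)\Big]

where y_{it} indicates whether facility i is open in period t, x^{+}_{it} and x^{-}_{it} are its capacity expansion and contraction, q_{ijt} is the flow on transportation link (i, j), and P_{pt}, I_{pt}, B_{pt}, O_{pt} are production, inventory, backorder, and outsourcing levels of product p, with f, e, c, τ, ρ, h, b, o the corresponding unit costs. All symbols are illustrative placeholders.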
10.
Reusing software through copying and pasting is a continuous plague in software development despite the fact that it creates serious maintenance problems. Various techniques have been proposed to find duplicated redundant code (also known as software clones). A recent study has compared these techniques and shown that token-based clone detection based on suffix trees is fast but yields clone candidates that are often not syntactic units. Current techniques based on abstract syntax trees—on the other hand—find syntactic clones but are considerably less efficient. This paper describes how we can make use of suffix trees to find syntactic clones in abstract syntax trees. This new approach is able to find syntactic clones in linear time and space. The paper reports the results of a large case study in which we empirically compare the new technique to other techniques using the Bellon benchmark for clone detectors. The Bellon benchmark consists of clone pairs validated by humans for eight software systems written in C or Java from different application domains. The new contributions of this paper over the conference paper are the additional analysis of Java programs, the exploration of an alternative path that uses parse trees instead of abstract syntax trees, and the investigation of the impact on recall and precision when clone analyses insist on consistent parameter renaming.
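The core idea can be sketched as follows: serialize the syntax tree into a sequence of node types so that subtree tokens stay contiguous, then search for long repeated subsequences. The paper achieves this in linear time and space with suffix trees; the hedged sketch below uses a simple quadratic scan for clarity and Python's ast module rather than the authors' infrastructure.

```python
# Hedged sketch of AST-based clone detection: serialize the tree to a preorder
# sequence of node-type names, then find long common runs between two fragments.
# The paper does this in linear time with suffix trees; this sketch is quadratic.

import ast

def preorder(node):
    """Preorder sequence of AST node-type names; subtree tokens stay contiguous."""
    seq = [type(node).__name__]
    for child in ast.iter_child_nodes(node):
        seq.extend(preorder(child))
    return seq

def clone_candidate(src_a, src_b, min_len=8):
    """Longest common node-type window between two fragments, or None if too short."""
    a, b = preorder(ast.parse(src_a)), preorder(ast.parse(src_b))
    best = []
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            if k >= min_len:
                best.append((i, j, k))
    return max(best, key=lambda t: t[2], default=None)

# Usage: two functions that differ only in identifiers are reported as a clone candidate,
# because their node-type sequences are identical even though their tokens are not.
f1 = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
f2 = "def acc(vals):\n    r = 0\n    for v in vals:\n        r += v\n    return r\n"
print(clone_candidate(f1, f2))
```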
11.
An important aspect of the quality assurance of large component repositories is to ensure the logical coherence of component metadata, and to this end one needs to identify incoherences as early as possible. Some relevant classes of problems can be formulated in terms of properties of the future repositories into which the current repository may evolve. However, checking such properties on all possible future repositories requires a way to construct a finite representation of the infinite set of all potential futures. A class of properties for which this can be done is presented in this work. We illustrate the practical usefulness of the approach with two quality assurance applications: (i) establishing the amount of “forced upgrades” induced by introducing new versions of existing components in a repository, and (ii) identifying outdated components that are currently not installable and need to be upgraded in order to become installable again. For both applications we provide experience reports obtained on the Debian free software distribution.
12.
This paper analyses the working mechanism of the EDCA protocol, which provides quality of service in WLANs, and proposes improving WLAN performance by changing the parameters in the formulas used for the different access categories. Simulation results show that, by adjusting the contention windows of the different access categories, the average delay of the whole network can be reduced while the throughput increases substantially.
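To make the lever concrete, the sketch below shows how EDCA-style contention-window parameters shape the expected backoff delay: each access category draws a backoff from [0, CW] and doubles CW (up to CWmax) after a failed attempt. The parameter values are illustrative, not the tuned values from the paper.

```python
# Hedged sketch of how contention-window parameters shape EDCA backoff delay.
# Values are illustrative; they are not the paper's tuned parameters.

import random

def backoff_slots(cw_min, cw_max, retries):
    """Sample the total backoff (in slots) over `retries` failed attempts plus one success."""
    cw, total = cw_min, 0
    for _ in range(retries + 1):
        total += random.randint(0, cw)
        cw = min(2 * (cw + 1) - 1, cw_max)   # exponential backoff between attempts
    return total

def mean_backoff(cw_min, cw_max, retries=2, samples=10_000):
    return sum(backoff_slots(cw_min, cw_max, retries) for _ in range(samples)) / samples

# A voice-like access category with a small contention window waits far less, on average,
# than a best-effort category; tuning these windows is the lever the abstract exploits.
print("small CW (e.g. voice AC):      ", mean_backoff(cw_min=7,  cw_max=15))
print("large CW (e.g. best-effort AC):", mean_backoff(cw_min=31, cw_max=1023))
```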
13.
Debugging deployed systems is an arduous and time-consuming task. It is often difficult to generate traces from deployed systems because of the disturbance and overhead that trace collection may cause on a system in operation. Many organizations also do not keep historical traces of failures. Earlier techniques focusing on fault diagnosis in deployed systems, on the other hand, require a collection of passing-failing traces, in-house reproduction of faults, or a historical collection of failed traces. In this paper, we investigate an alternative solution. We investigate how artificial faults, generated using software mutation in a test environment, can be used to diagnose actual faults in deployed software systems. The use of traces of artificial faults can provide relief when it is not feasible to collect different kinds of traces from deployed systems. Using artificial and actual faults we also investigate the similarity of the function call traces of different faults in functions. To achieve our goal, we use decision trees to build a model of traces generated from mutants and test it on faulty traces generated from actual programs. The application of our approach to various real-world programs shows that mutants can indeed be used to diagnose faulty functions in the original code with approximately 60-100% accuracy on reviewing 10% or less of the code, whereas contemporary techniques using pass-fail traces show poor results in the context of software maintenance. Our results also show that different faults in closely related functions occur with similar function call traces. The use of mutation in fault diagnosis shows promising results, but the experiments also show the challenges related to using mutants.
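A hedged sketch of the pipeline the abstract describes: train a classifier on trace features obtained from mutants, then predict the faulty function for a trace collected from the field. The features, labels, and sizes below are tiny and hypothetical; the paper's feature engineering is not reproduced here.

```python
# Hedged sketch of diagnosing faulty functions from traces using a decision tree.
# Not the paper's pipeline: each row encodes a failing trace as call counts per function,
# and the label names the function in which the artificial fault was injected by mutation.

from sklearn.tree import DecisionTreeClassifier

functions = ["parse", "validate", "compute", "render"]

# Training data: traces produced in a test environment from mutants of each function.
mutant_traces = [
    [5, 1, 0, 0],   # mutant in parse: parsing retried, later stages never reached
    [4, 2, 0, 0],
    [1, 6, 1, 0],   # mutant in validate
    [1, 5, 2, 0],
    [1, 1, 7, 1],   # mutant in compute
    [1, 1, 6, 2],
]
faulty_function = ["parse", "parse", "validate", "validate", "compute", "compute"]

model = DecisionTreeClassifier(random_state=0).fit(mutant_traces, faulty_function)

# Diagnosis: a failing trace collected from the deployed system.
deployed_failure = [[1, 1, 8, 1]]
print(model.predict(deployed_failure))   # ['compute'] -> review this function first
```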
14.
The development of high quality large-scale software systems within schedule and budget constraints is a formidable software engineering challenge. The modification of these systems to incorporate new and changing capabilities poses an even greater challenge. This modification activity must be performed without adversely affecting the quality of the existing system. Unfortunately, this objective is rarely met. Software modifications often introduce undesirable side-effects, leading to reduced quality. In this paper, the software modification process for a large, evolving real-time system is analysed using causal analysis. Causal analysis is a process for achieving quality improvements via fault prevention. The fault prevention stems from a careful analysis of faults in search of their causes. This paper reports our use of causal analysis on several significant modification activities resulting in about two hundred defects. Recommendations for improved software modification and quality assurance processes based on our findings are also presented.
15.
Frank Elberzhager, Jürgen Münch, Vi Tran Ngoc Nha, Information and Software Technology, 2012, 54(1): 1-15
Context: A lot of different quality assurance techniques exist to ensure high-quality products. However, most often they are applied in isolation. A systematic combination of different static and dynamic quality assurance techniques promises to exploit synergy effects, such as higher defect detection rates or reduced quality assurance costs. However, a systematic overview of such combinations and reported evidence about achieving synergy effects with such kinds of combinations is missing.
Objective: The main goal of this article is the classification and thematic analysis of existing approaches that combine different static and dynamic quality assurance techniques, including reported effects, characteristics, and constraints. The result is an overview of existing approaches and a suitable basis for identifying future research directions.
Method: A systematic mapping study was performed by two researchers, focusing on four databases with an initial result set of 2498 articles, covering articles published between 1985 and 2010.
Results: In total, 51 articles were selected and classified according to multiple criteria. The two main dimensions of a combination are integration (i.e., the output of one quality assurance technique is used for the second one) and compilation (i.e., different quality assurance techniques are applied to ensure a common goal, but in isolation). The combination of static and dynamic analyses is one of the most common approaches and is usually conducted in an integrated manner. The combination of inspection and testing techniques is done more often in a compiled way than in an integrated way.
Conclusion: The results show an increased interest in this topic in recent years, especially with respect to the integration of static and dynamic analyses. Inspection and testing techniques are currently mostly performed in an isolated manner. The integration of inspection and testing techniques is a promising research direction for the exploitation of additional synergy effects.
16.
R. Kia, A. Baboli, N. Javadian, R. Tavakkoli-Moghaddam, M. Kazemi, J. Khorrami, Computers & Operations Research, 2012
This paper presents a novel mixed-integer non-linear programming model for the layout design of a dynamic cellular manufacturing system (DCMS). In a dynamic environment, the product mix and part demands vary over a multi-period planning horizon. As a result, the best cell configuration for one period may not be efficient for successive periods, and reconfigurations therefore become necessary. Three major and interrelated decisions are involved in the design of a CMS, namely cell formation (CF), group layout (GL) and group scheduling (GS). A novel aspect of this model is that it makes the CF and GL decisions concurrently in a dynamic environment. The proposed model integrating the CF and GL decisions can be used by researchers and practitioners to design GL in practical and dynamic cell formation problems. Another promising aspect of this model is the use of a multi-row layout to locate machines in cells configured with flexible shapes. Such a DCMS model with an extensive coverage of important manufacturing features has not been proposed before; it incorporates several design features including alternate process routings, operation sequence, processing time, production volume of parts, machine purchasing, duplicate machines, machine capacity, lot splitting, intra-cell layout, inter-cell layout, multi-row layout of equal-area facilities and flexible reconfiguration. The objective of the integrated model is to minimize the total costs of intra- and inter-cell material handling, machine relocation, purchasing new machines, machine overhead and machine processing. Linearization procedures are used to transform the presented non-linear programming model into a linearized formulation. Two numerical examples taken from the literature are solved by the Lingo software using a branch-and-bound method to illustrate the performance of this model. An efficient simulated annealing (SA) algorithm with an elaborately designed solution representation and neighborhood generation is developed to solve the proposed model because of its NP-hardness. It is then tested on several problems with different sizes and settings to verify the computational efficiency of the developed algorithm in comparison with the Lingo software. The obtained results show that the proposed SA is able to find near-optimal solutions in roughly one-hundredth of the computational time required by Lingo. The computational results also show that the proposed model to some extent overcomes common disadvantages of existing dynamic cell formation models, which have not yet considered layout problems.
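The accept/cool loop at the heart of such an SA solver can be sketched generically. This is not the paper's elaborately designed solution representation or neighborhood: the cost function, move, and toy instance below are hypothetical placeholders showing the mechanics only.

```python
# Hedged sketch of a simulated annealing loop of the kind used for DCMS-style models.
# Solution representation, neighborhood move, and cost are placeholders, not the paper's.

import math
import random

def simulated_annealing(initial, cost, neighbor, t0=100.0, cooling=0.95,
                        iters_per_temp=50, t_min=1e-3):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbor(current)
            cand_cost = cost(cand)
            delta = cand_cost - current_cost
            # Accept improvements always; accept worse moves with a temperature-dependent probability.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = cand, cand_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= cooling                      # geometric cooling schedule
    return best, best_cost

# Toy usage: assign 6 machines to 2 cells, minimizing a made-up inter-cell handling cost.
flows = {(0, 1): 4, (1, 2): 3, (3, 4): 5, (4, 5): 2, (2, 3): 1}
def handling_cost(assign):               # cost counts flow that crosses cell boundaries
    return sum(w for (i, j), w in flows.items() if assign[i] != assign[j])
def swap_one(assign):
    a = list(assign); k = random.randrange(len(a)); a[k] = 1 - a[k]; return a

print(simulated_annealing([0, 0, 0, 1, 1, 1], handling_cost, swap_one))
```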
17.
The advantage of MPLS networks lies not only in faster packet routing and forwarding but also in efficient quality of service (QoS). How to realize this capability in an MPLS VPN is the focus of this paper. By adopting an algorithm based on the "hose model" and combining it with related technologies such as DiffServ, an MPLS VPN can satisfy the QoS needs of most applications.
18.
Network simulation is a new technology for communication network planning, design and analysis; it can verify and analyse the effectiveness and feasibility of real network construction plans and provide a quantitative basis for communication network planning and design. Building on an introduction to the hierarchical network modelling method of the OPNET software, this paper presents a concrete implementation method for using OPNET to simulate and analyse the quality of service (QoS) and performance of communication networks, evaluating network design schemes comprehensively in terms of network mechanisms, network performance and QoS. Taking an ATM network as an example, the method is applied to QoS and performance simulation, analysing the performance and service quality of the available bit rate (ABR) and constant bit rate (CBR) service categories. The simulation results are consistent with the behaviour of the real network, demonstrating the effectiveness of the method.
19.
In recent years, with the rapid development of educational informatization in China, the transmission of VOD multimedia teaching video streams over campus networks has faced growing quality of service (QoS) requirements. Today's IPv4-based networks are weak at managing traffic, delay and bandwidth and cannot provide real QoS control and guarantees for these streaming applications. Building a new IPv6 campus network that combines IPv6 and multicast technologies on top of streaming media technology is a better way to solve the QoS problems of streaming media transmission on campus networks.
20.
International Journal of Human-Computer Studies, 2014, 72(1): 77-99
Evolutions in the context of use require evolutions in user interfaces, even while they are being used by operators. User Centered Development promotes reactive answers to this kind of evolution, either through software evolutions in iterative development approaches or at runtime by providing additional information to the operators, such as contextual help. This paper proposes a model-based approach to support proactive management of context-of-use evolutions. By proactive management we mean mechanisms in place to plan and implement evolutions and adaptations of the entire user interface (including behaviour) in a generic way. The proposed approach handles both concentration and distribution of user interfaces, requiring either fusion of information into a single UI or fission of information across several. This generic model-based approach is exemplified on a safety-critical system from the space domain. We show how new user interfaces can be generated at runtime to provide a single place that gathers all the information required to perform the task. These user interfaces have to be generated at runtime because new procedures (i.e. sequences of operations to be executed in a semi-autonomous way) can be defined by operators at any time in order to react to adverse events and to keep the space system in operation. Such contextual, activity-related user interfaces complement the original user interfaces designed for operating the command and control system. The resulting user interface thus corresponds to a distribution of user interfaces in a focus+context way, improving usability by increasing both efficiency and effectiveness.