Similar literature
20 similar documents found (search time: 15 ms)
1.
Markov chain models are used to evaluate the dependability properties (reliability, safety, availability, maintainability, etc.) of mission-critical systems. Dependability models are often focused only on basic stuck-at faults, whereas transient faults, although present in the operational environment, are not included in the dependability prediction. The aim of this paper is to show how transient faults influence the dependability prediction obtained with a Markov chain model. A basic TMR (triple modular redundancy) Markov chain model that considers only stuck-at faults is compared with our extended TMR model that considers both stuck-at and transient faults. The main focus is on the calculation of the dependability parameter lambda, i.e., the failure rate of the system.
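As a point of reference (a textbook result, not taken from the paper itself): assuming exponentially distributed permanent failures with per-module failure rate λ and a perfect voter, the reliability and the resulting time-varying failure rate of a basic stuck-at-only TMR system are

```latex
R_{\mathrm{TMR}}(t) = 3e^{-2\lambda t} - 2e^{-3\lambda t},
\qquad
\lambda_{\mathrm{TMR}}(t) = -\frac{\mathrm{d}}{\mathrm{d}t}\ln R_{\mathrm{TMR}}(t)
 = \frac{6\lambda\left(e^{-2\lambda t}-e^{-3\lambda t}\right)}{3e^{-2\lambda t} - 2e^{-3\lambda t}}.
```

The paper's extended model would additionally introduce states and transition rates for transient-fault arrival and recovery, which this simple expression does not capture.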

2.
Distributed virtualization changes the way software systems are built. However, it raises dependability-assurance problems owing to the complex social relationships and interactions between service components. The best way to address these problems in a distributed virtualized environment is dependable service-component selection, which can be modeled as finding a dependable service path, a multi-constrained optimal path problem. In this paper, a service-component selection method that searches for the dependable service path in a distributed virtualized environment is proposed from the perspective of dependability assurance. The concept of Quality of Dependability is introduced to describe and constrain software-system dependability during dynamic composition. We then model dependable service-component selection as a multi-constrained optimal path problem and apply the Adaptive Bonus-Penalty Microcanonical Annealing algorithm to find the optimal dependable service path. Experimental results show that the proposed algorithm has a high search success rate and converges quickly.
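A minimal sketch of the general idea (simulated annealing over candidate service paths, with a penalty for violated dependability constraints); the function and metric names are illustrative, and this is not the paper's exact Adaptive Bonus-Penalty Microcanonical Annealing algorithm:

```python
# Illustrative sketch only: anneal over candidate service paths, penalizing
# paths that violate dependability constraints (a bonus/penalty-style cost).
import math
import random

def path_cost(path, metrics, constraints):
    """Sum the base cost of a path and add a penalty per violated constraint."""
    cost = sum(metrics[c]["cost"] for c in path)
    for name, bound in constraints.items():          # e.g. {"failure_prob": 0.01}
        value = sum(metrics[c][name] for c in path)
        if value > bound:
            cost += 100.0 * (value - bound)          # penalty term for the violation
    return cost

def select_path(candidates, metrics, constraints, t0=1.0, cooling=0.95, steps=500):
    current = random.choice(candidates)
    best = current
    t = t0
    for _ in range(steps):
        neighbour = random.choice(candidates)        # a real algorithm would mutate the current path
        delta = path_cost(neighbour, metrics, constraints) - path_cost(current, metrics, constraints)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            current = neighbour
        if path_cost(current, metrics, constraints) < path_cost(best, metrics, constraints):
            best = current
        t *= cooling
    return best
```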

3.
UML (Unified Modeling Language) is a standard design notation that offers state machine diagrams for specifying reactive software systems. The “Modeling and Analysis of Real-Time and Embedded systems” profile (MARTE) extends UML with capabilities for performance analysis, and has been specialized into a “Dependability Analysis and Modeling” profile (DAM) that provides UML with dependability assets. In this work, we propose an approach for the automatic transformation of UML-DAM models into Deterministic and Stochastic Petri nets and the subsequent dependability analysis.
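To give a flavour of the target formalism (not the paper's transformation rules): a deterministic and stochastic Petri net mixes exponentially timed and deterministically timed transitions. The toy model below, with assumed parameters, simulates a single component that fails at an exponential rate and is repaired after a fixed delay, and estimates its steady-state availability:

```python
# Toy DSPN-style model (assumed parameters): exponential "fail" transition,
# deterministic "repair" transition; estimate availability by simulation.
import random

def simulate_availability(fail_rate=0.01, repair_time=5.0, horizon=1_000_000.0, seed=1):
    rng = random.Random(seed)
    clock, up_time = 0.0, 0.0
    while clock < horizon:
        time_to_failure = rng.expovariate(fail_rate)    # stochastic (exponential) transition
        up_time += min(time_to_failure, horizon - clock)
        clock += time_to_failure + repair_time          # deterministic repair transition
    return up_time / horizon

if __name__ == "__main__":
    print(f"estimated availability = {simulate_availability():.4f}")
    # analytic check: MTTF / (MTTF + MTTR) = 100 / 105, roughly 0.952
```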

4.
For Internet-scale software running in the open, dynamic and hard-to-control Internet environment, dependability evaluation is an important topic. However, most existing work computes dependability in a black-box fashion, without considering system structure in depth, and with overly simple evaluation metrics. This paper therefore proposes a Bayesian-network-based dependability evaluation model for Internet-scale software. Based on a structural analysis of the software and its structural patterns, the model builds a multi-layer system of dependability evaluation metrics. Using Bayesian networks and a bottom-up, layer-by-layer analysis, it evaluates multiple dependability metrics of each constituent entity and of the system as a whole, combines them into a unified dependability result, and corrects that result with objective data. Experiments show that the model can evaluate the dependability of Internet-scale software clearly and objectively, and can assist the design, development and deployment of such software.

5.
The software engineering discipline's continuing growth urges a shift toward value-based software engineering in order to prove its contribution to the bottom line. However, the number of studies containing return-on-investment (ROI) data on software process improvement (SPI) has not grown much in the five years following the 2004 IEEE Software article "Measuring the ROI of Software Process Improvement." Because software engineering is so strongly related to the success and failure of businesses, we must find a way to measure value. The adoption of Six Sigma might just be the catalyst the software engineering discipline needs: software companies that embrace Six Sigma to structure their improvements will outclass competitors because of their skill in making their improvement efforts contribute directly to the financial bottom line.
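For readers unfamiliar with the metric, the ROI figure such studies report is typically the plain ratio (a standard definition, not specific to this article):

```latex
\mathrm{ROI} = \frac{\text{benefits} - \text{costs}}{\text{costs}}
```

so an SPI program that costs $200{,}000 and yields $500{,}000 in savings has an ROI of 1.5 (150%).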

6.
7.
Digital microfluidic biochips are often used in safety-critical fields, so reliability has become an important criterion for their design and test. Guaranteeing system reliability requires thorough testing of the chip, and reconfiguration requires accurate fault diagnosis of the chip array. This paper proposes a multiple-fault diagnosis method: the array is first tested in parallel by rows and columns to identify the faulty rows and columns, and an improved binary search is then used to locate the faults within them. The improved binary search can test along several valid fault-free paths, and a greedy algorithm is given to find effective search paths for it. Diagnostic fault coverage is used to measure the effectiveness of the method. Experimental results show that, compared with the traditional binary search, the proposed method locates multiple faults more effectively.
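A hedged sketch of the core localization step (not the paper's improved algorithm or its greedy path selection): once row/column tests have flagged a faulty row, a binary search over that row can isolate a single faulty cell with O(log n) droplet tests, assuming a test oracle that reports whether a droplet traverses a given segment successfully:

```python
# Illustrative binary search over one faulty row of the electrode array.
# `segment_ok(row, lo, hi)` is an assumed test oracle: True if a test droplet
# can traverse cells row[lo..hi] without getting stuck.
def locate_fault(row, n_cells, segment_ok):
    lo, hi = 0, n_cells - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if segment_ok(row, lo, mid):     # left half passes, fault is in the right half
            lo = mid + 1
        else:                            # fault lies in the left half
            hi = mid
    return lo                            # index of the (single) faulty cell

# Example with one stuck electrode at index 11 in a 16-cell row:
faulty = {11}
oracle = lambda row, lo, hi: not any(lo <= f <= hi for f in faulty)
print(locate_fault(0, 16, oracle))       # -> 11
```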

8.
The idea of the real-time enterprise is gaining momentum very quickly. Although implementing a real-time solution is a complex and significant undertaking, the ROI is intuitively obvious and quantitatively verifiable. Managers must be provided with an environment that allows them to make quicker, more informed business decisions to achieve their objectives. Managing companies with a mix of real-time and historical information is the best of both worlds; this can be accomplished through an evolutionary process of integration and accommodation. Existing system investments can be leveraged, ultimately allowing applications to provide the ROI they promised.

9.
The dependability of a system commonly refers to its reliability, availability and maintainability (RAM), but when this concept is applied to user interfaces there is no common agreement on which aspects of user–system interaction contribute to a satisfactory RAM level for the whole system. In particular, for haptic systems, interface dependability may become a crucial issue in medical and military domains where life-critical systems are manipulated, or where costly remote-control operations are performed, as in industrial process control or in aerospace/automotive engineering and manufacturing. This paper discusses the role of dependability in haptic user interfaces, aiming at a framework for assessing the usability and dependability properties of haptic systems and their possible correlations. The research is based on the analysis of a visual–haptic simulator for maintenance-activity training in the aerospace industry, which is taken as a case study. As a result, we propose a novel framework that collects and processes relevant interaction data during the execution of haptic tasks, enabling the analysis of correlations between dependability and usability.

10.
Context: Many modern software systems must deal with change and uncertainty. Traditional dependability requirements engineering is not equipped for this, since it assumes that the context in which a system operates is stable and deterministic, an assumption that often leads to failures and recurrent corrective maintenance. The Contextual Goal Model (CGM), a requirements model built on the idea of context-dependent goal fulfillment, mitigates the problem by relating alternative strategies for achieving goals to the space of context changes. Additionally, the Runtime Goal Model (RGM) adds behavioral constraints to the fulfillment of goals that can be checked against system execution traces. Objective: This paper proposes GODA (Goal-Oriented Dependability Analysis) and its supporting framework as concrete means for reasoning about the dependability requirements of systems that operate in dynamic contexts. Method: GODA blends the power of CGM, RGM and probabilistic model checking to provide a formal requirements specification and verification solution. At design time, it can inform design and implementation decisions; at runtime, it helps the system self-adapt by analyzing the different alternatives and selecting the one with the highest probability of the system being dependable. GODA is integrated into TAO4ME, a state-of-the-art tool for goal modeling and analysis. Results: GODA has been evaluated for feasibility and scalability on Mobee, a real-life software system that allows people to share live, updated information about public transportation via mobile devices, and on larger goal models. At runtime, GODA can verify up to two thousand leaf tasks in less than 35 ms and requires less than 240 KB of memory. Conclusion: The results show GODA’s design-time and runtime verification capabilities, even under limited computational resources, and the scalability of the proposed solution.
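A much-simplified sketch of the kind of computation such an analysis performs (assumed goal structure and probabilities; GODA itself relies on a probabilistic model checker rather than this direct recursion): the probability of fulfilling a goal is derived bottom-up from the success probabilities of leaf tasks over an AND/OR decomposition, assuming independent tasks:

```python
# Illustrative bottom-up evaluation of a goal tree with AND/OR refinements.
# A node is ("task", p), ("and", [children]) or ("or", [children]).
from math import prod

def satisfaction_probability(node):
    kind, payload = node
    if kind == "task":
        return payload                                   # leaf task success probability
    child_probs = [satisfaction_probability(c) for c in payload]
    if kind == "and":                                    # all subgoals must succeed
        return prod(child_probs)
    if kind == "or":                                     # at least one alternative succeeds
        return 1.0 - prod(1.0 - p for p in child_probs)
    raise ValueError(f"unknown node kind: {kind}")

root = ("and", [("task", 0.99),
                ("or", [("task", 0.90), ("task", 0.85)])])
print(satisfaction_probability(root))   # 0.99 * (1 - 0.1 * 0.15), roughly 0.975
```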

11.
A major development in business practice in recent years is the growing interest in customer relationship management (CRM). CRM focuses on establishing and maintaining profitable relationships with customers using modern information technology (IT) and has emerged as a major research field in business and information systems engineering. However, despite huge investments, many CRM projects fail to achieve their objectives because the complex and interdisciplinary nature of CRM is not addressed adequately. In fact, adopting a customer-centric orientation within value-based management requires not only cross-functional integration of different business departments but also selectively adjusted collaboration between those departments. This article provides an overview of the state of the art of CRM in the literature as well as current practice in companies. Furthermore, it outlines the specific challenges of value-based CRM for the cross-functional integration and collaboration of marketing, financial management, and IT. Thus, in addition to a mutual alignment of marketing and IT, value-based analysis, planning, and controlling of CRM activities requires the development and implementation of standardized performance measurements and adequate IT support.

12.
Network virtualization has been identified as a promising approach to overcoming the Internet’s current ossification. A major challenge is the mapping of virtual networks (VNs) onto the substrate network, which is NP-hard, and several heuristics have therefore been proposed to achieve efficient allocations. However, existing approaches focus only on performance, and dependability issues are usually neglected. Dependability involves metrics such as reliability and availability, which directly impact quality of service, and a key limitation is the absence of mechanisms for estimating dependability metrics in virtual networks. This paper proposes an automated approach for estimating dependability metrics in virtual network environments and a mapping algorithm based on the GRASP metaheuristic. The approach is based on stochastic Petri nets (SPN) and reliability block diagrams (RBD), and a tool is also presented that automates model generation and evaluation. Experimental results demonstrate the feasibility of the proposed technique, as well as the trade-off between a VN’s availability and its cost.
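A hedged illustration of the RBD side of such an evaluation (generic series/parallel formulas with invented parameter values, not the paper's generated models): steady-state availability of each element is MTTF/(MTTF+MTTR), elements in series multiply their availabilities, and redundant (parallel) elements combine as one minus the product of their unavailabilities:

```python
# Generic reliability-block-diagram availability helpers (illustrative values).
def availability(mttf, mttr):
    return mttf / (mttf + mttr)

def series(*blocks):
    result = 1.0
    for a in blocks:
        result *= a                      # every block must be up
    return result

def parallel(*blocks):
    unavailable = 1.0
    for a in blocks:
        unavailable *= (1.0 - a)         # all redundant blocks must be down
    return 1.0 - unavailable

# Virtual link mapped onto a substrate node plus two redundant substrate paths:
node = availability(mttf=2000.0, mttr=4.0)
path = availability(mttf=800.0, mttr=2.0)
print(series(node, parallel(path, path)))
```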

13.
Security and reliability are two important dependability attributes of database systems, and the policies that enforce them conflict with efficient system operation. This paper analyzes dependability attributes such as security and reliability and their related testing tools, and describes the performance-testing characteristics of dependability benchmarks, in order to guide users in defining, evaluating and selecting database security and reliability policies. Finally, it introduces the functionality and implementation of a database security-cost testing tool built by extending the TPC performance benchmarks.
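One simple way to quantify the "security cost" such a tool measures (an illustrative definition, not necessarily the metric used in the paper) is the relative throughput loss of a TPC-style workload when a security or reliability policy is enabled:

```latex
\mathrm{cost}_{\text{policy}} = \frac{\mathrm{tps}_{\text{baseline}} - \mathrm{tps}_{\text{policy}}}{\mathrm{tps}_{\text{baseline}}}
```

e.g., a drop from 1,000 to 850 transactions per second corresponds to a 15% security cost.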

14.
Programs fail mainly for two reasons: logic errors in the code and exception failures. Exception failures can account for up to two-thirds of system crashes and are therefore worthy of serious attention. Traditional approaches to reducing exception failures, such as code reviews, walkthroughs, and formal testing, while very useful, are limited in their ability to address a core problem: the programmer's inadequate coverage of exceptional conditions. The coverage problem may be rooted in cognitive factors that impede the mental generation (or recollection) of the exception cases that would pertain in a particular situation, resulting in insufficient software robustness. This paper describes controlled experiments testing the hypothesis that robustness against exception failures can be improved through various coverage-enhancing techniques: N-version programming, group collaboration, and dependability cases. N-version programming and collaboration are well known. Dependability cases, derived from safety cases, are a new methodology based on structured taxonomies and memory aids that help software designers think about and improve exception-handling coverage. All three methods improved robustness to exception failures relative to control conditions, but dependability cases proved the most efficacious in balancing cost and effectiveness.
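For context, a minimal sketch of the N-version idea (a generic majority voter, not the experiments' actual setup): several independently written versions compute the same result, and a voter masks a version whose exception handling fails, assuming version failures are independent:

```python
# Illustrative N-version execution with majority voting.
from collections import Counter
import math

def n_version(inputs, versions):
    """Run each version; treat a raised exception as 'no vote'."""
    votes = []
    for run in versions:
        try:
            votes.append(run(inputs))
        except Exception:                # a version whose exception handling fails is outvoted
            continue
    if not votes:
        raise RuntimeError("all versions failed")
    value, count = Counter(votes).most_common(1)[0]
    if count * 2 <= len(versions):
        raise RuntimeError("no majority agreement")
    return value

# Three toy 'versions' of a square-root routine, one of which mishandles x == 0:
v1 = lambda x: math.sqrt(x)
v2 = lambda x: x ** 0.5
v3 = lambda x: math.exp(0.5 * math.log(x))    # raises ValueError on x == 0
print(n_version(4.0, [v1, v2, v3]))           # -> 2.0
```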

15.
Viewing the software testing process from a value perspective can increase the profitability of software. By quantifying the value created by the testing process, an intuitive and practical test-effort estimation model is constructed. The model estimates both the testing effort and the defect-fixing effort of the test phase, providing useful information for making and adjusting test plans. It describes the relationship between the activities of the testing process and the value they create, and accounts for the often-ignored fact that defect-fixing activities themselves introduce new defects. An application example shows that the model has good usability and effectiveness.
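The "fixes introduce new defects" effect mentioned above can be made concrete with a simple closed form (an illustration under assumed behaviour, not the paper's model): if fixing defects re-injects a fraction r of new defects per round, the total number of defects to be handled starting from D0 initial defects is a geometric series:

```latex
D_{\text{total}} = D_0 \sum_{k=0}^{\infty} r^{k} = \frac{D_0}{1 - r}, \qquad 0 \le r < 1,
```

so with D0 = 100 defects and r = 0.1 the test phase must plan for roughly 111 defect fixes, and the corresponding fixing effort scales accordingly.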

16.
Theoretical Computer Science, 2003, 290(2): 1223–1251
Dependability is a qualitative term referring to a system's ability to meet its service requirements in the presence of faults. The types and number of faults covered by a system play a primary role in determining the level of dependability that the system can potentially provide. Given the variety and multiplicity of fault types, system algorithm design often focuses on specific fault types in order to simplify the design process, resulting in either over-optimistic (all faults permanent) or over-pessimistic (all faults malicious) dependable system designs. A more practical and realistic approach is to recognize that faults of varied severity levels and differing occurrence probabilities may appear in combination rather than as a single assumed fault type. The ability to let the user select or customize a particular combination of fault types of varied severity characterizes the proposed customizable fault/error model (CFEM). The CFEM organizes diverse fault categories into a cohesive framework by classifying faults according to the effect they have on the required system services rather than by targeting the source of the fault condition. In this paper, we develop (a) the complete framework for the CFEM fault classification, (b) the voting functions applicable under the CFEM, and (c) the fundamental distributed services of consensus and convergence under the CFEM on which dependable distributed functionality can be supported.
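As a rough illustration of the kind of voting functions involved (a generic hybrid voter; the CFEM's actual voting functions are defined in the paper): with replicated values that may be corrupted by faults of different severities, a voter can prefer exact majority agreement and fall back to the median for approximate, convergence-style agreement:

```python
# Illustrative hybrid voter: exact majority if it exists, otherwise the median.
from collections import Counter
from statistics import median

def hybrid_vote(values):
    value, count = Counter(values).most_common(1)[0]
    if count * 2 > len(values):          # strict majority masks arbitrary minority faults
        return value
    return median(values)                # fallback: convergence-style approximate agreement

print(hybrid_vote([42, 42, 42, 7]))      # -> 42 (majority)
print(hybrid_vote([40, 42, 41, 90]))     # -> 41.5 (median limits the effect of the outlier)
```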

17.
Internet-scale software is becoming more and more important as a way of constructing software systems as the Internet develops rapidly. It comprises a set of widely distributed software entities running in an open, dynamic and uncontrollable Internet environment. Several aspects impact the dependability of Internet-scale software, including technical, organizational, decisional and human aspects. It is very important to evaluate this dependability by integrating all of these aspects and analyzing the system architecture starting from its most basic elements; however, such an evaluation model has been lacking. This paper proposes a dependability evaluation model for Internet-scale software based on Bayesian networks. The structure of Internet-scale software is analyzed, and an evaluation system of dependability metrics is established, comprising static metrics, dynamic metrics, prior metrics and correction metrics. A process of assessment-based trust attenuation is proposed to integrate the subjective trust factors and objective dependability factors that affect system quality. A Bayesian network is built according to the structural analysis, and a bottom-up method that uses Bayesian reasoning to analyze and calculate entity dependability and integration dependability layer by layer is described. A unified dependability figure for the whole system is computed and corrected with objective data. Experiments on a real system show that the model can evaluate the dependability of Internet-scale software clearly and objectively, and that it offers effective help for the design, development, deployment and assessment of Internet-scale software.
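A heavily simplified sketch of the bottom-up flavour of such an evaluation (invented weights and structure; the paper itself performs full Bayesian inference rather than this weighted aggregation): each entity's dependability is aggregated from its metric scores, parent entities aggregate their children layer by layer, and the final figure is corrected towards objective measurement data:

```python
# Illustrative bottom-up aggregation over an entity tree (not real Bayesian inference).
def entity_dependability(metrics, weights):
    """Weighted combination of an entity's metric scores, each in [0, 1]."""
    return sum(weights[m] * v for m, v in metrics.items()) / sum(weights.values())

def system_dependability(children, child_weights):
    """Aggregate child entities layer by layer (here, a single weighted layer)."""
    return sum(w * d for d, w in zip(children, child_weights)) / sum(child_weights)

def correct_with_objective_data(prior, observed, confidence=0.7):
    """Shift the subjective estimate towards objectively measured dependability."""
    return (1.0 - confidence) * prior + confidence * observed

weights = {"reliability": 0.5, "availability": 0.3, "security": 0.2}
e1 = entity_dependability({"reliability": 0.95, "availability": 0.99, "security": 0.90}, weights)
e2 = entity_dependability({"reliability": 0.90, "availability": 0.97, "security": 0.85}, weights)
prior = system_dependability([e1, e2], child_weights=[0.6, 0.4])
print(correct_with_objective_data(prior, observed=0.93))
```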

18.
The design of complex real-time systems needs to address dependability requirements such as safety, reliability, and security. We introduce a modelling- and simulation-based approach that allows the analysis and prediction of dependability constraints. Dependability can be improved by making use of fault-tolerance techniques. The de facto example in the real-time systems literature, a pump control system in a mining environment, is used to demonstrate our model-based approach. In particular, the system is modelled using the Discrete EVent system Specification (DEVS) formalism and then extended to incorporate fault-tolerance mechanisms; the modularity of the DEVS formalism facilitates this extension. The simulation demonstrates that the employed fault-tolerance techniques are effective: the system performs satisfactorily despite the presence of faults. The approach also makes it possible to make an informed choice between different fault-tolerance techniques. Performance metrics are used to measure the reliability and safety of the system and to evaluate the dependability achieved by the design. In our model-based development process, modelling, simulation and the eventual deployment of the system are seamlessly integrated.
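To hint at what a DEVS specification looks like (a hand-rolled toy atomic model with assumed states and timings, not the paper's pump-control model and not using a DEVS library): an atomic model is defined by its state, its time-advance function, and its internal/external transition and output functions:

```python
# Toy DEVS-style atomic model of a pump that fails after a while and is
# switched to a standby by the fault-tolerance mechanism.
INFINITY = float("inf")

class PumpModel:
    def __init__(self, time_to_failure=100.0, switchover_delay=2.0):
        self.state = "pumping"
        self.ttf = time_to_failure
        self.switchover = switchover_delay

    def time_advance(self):                      # ta(s): time until the next internal event
        return {"pumping": self.ttf, "failed": INFINITY, "switching": self.switchover}[self.state]

    def output(self):                            # lambda(s): emitted just before an internal transition
        return {"pumping": "failure_detected", "switching": "standby_active"}.get(self.state)

    def internal_transition(self):               # delta_int(s)
        self.state = {"pumping": "failed", "switching": "pumping", "failed": "failed"}[self.state]

    def external_transition(self, event):        # delta_ext(s, e, x)
        if self.state == "failed" and event == "activate_standby":
            self.state = "switching"
```

In a coupled model, the "failure_detected" output would be routed to a supervisor component that replies with "activate_standby", which is where the fault-tolerance extension would plug in.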

19.
Software firms invest in process improvements in order to benefit from decreased costs and/or increased productivity at some point in the future. Such efforts are seldom cheap, and they typically require a business case in order to obtain funding. We review some of the main techniques from financial theory for evaluating the risk and return associated with proposed investments and apply them to process improvement programs for software development. We also discuss significant theoretical considerations as well as robustness and correctness issues associated with applying each of the techniques to software development and process improvement activities. Finally, we introduce a present-value technique that incorporates both risk and return, has many applications to software development activities, and is recommended for use in a software process improvement context.
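The standard present-value machinery that such a technique builds on (a textbook formula, not the authors' specific method) discounts the expected future cash flows of an improvement program at a risk-adjusted rate r:

```latex
\mathrm{NPV} = -C_0 + \sum_{t=1}^{T} \frac{\mathrm{E}[B_t - C_t]}{(1 + r)^{t}},
```

where C0 is the up-front investment, Bt and Ct are the benefits and costs expected in year t, and a higher r reflects greater uncertainty in the projected savings; the program is worth funding only if the NPV is positive.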

20.
Software Defined Networking (SDN) is a network design paradigm that aims to simplify the implementation of complex networking infrastructures by separating the forwarding functionality (data plane) from the network’s logical control (control plane). Network devices are used only for forwarding, while decisions about where data is sent are taken by a logically centralized yet physically distributed component, the SDN controller. From a quality-of-service (QoS) point of view, an SDN controller is a complex system whose operation can depend heavily on a variety of parameters, e.g., its degree of distribution, the corresponding topology, and the number of network devices to control. Dependability aspects are particularly critical in this context. In this work, we present a new analytical modeling technique that represents an SDN controller whose components are organized in a hierarchical topology, focusing on reliability and availability aspects and overcoming issues and limitations of Markovian models. In particular, our approach captures changes in the operating conditions (e.g., in the number of managed devices) while still representing the underlying phenomena through generally distributed events. The dependability of a use case with a two-layer hierarchical SDN control plane is investigated with the proposed technique, and numerical results demonstrate the feasibility of the approach.
