20 similar documents found (search time: 15 ms)
1.
The addition of redundancy to data structures can be used to improve the ability of a software system to detect and correct errors, and to continue to operate according to its specifications. A case study is presented which indicates how such redundancy can be deployed and exploited at reasonable cost to improve software fault tolerance. Experimental results are reported for the small data base system considered.
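As a hedged illustration of the general idea (a minimal sketch, not the paper's case-study database), the snippet below keeps a redundant copy and a checksum inside each record, so corruption of the primary copy can be detected and silently corrected:

```python
import zlib

# Minimal illustration of structural redundancy: each record stores a
# duplicate copy plus a checksum, so single-copy corruption can be both
# detected (checksum mismatch) and corrected (restore from the copy).
# This is a toy sketch, not the database from the paper's case study.

class RedundantRecord:
    def __init__(self, data: bytes):
        self.primary = data
        self.backup = data                      # structural redundancy
        self.checksum = zlib.crc32(data)        # detection redundancy

    def read(self) -> bytes:
        if zlib.crc32(self.primary) != self.checksum:
            # Error detected: repair from the redundant copy.
            assert zlib.crc32(self.backup) == self.checksum, "both copies corrupt"
            self.primary = self.backup
        return self.primary

rec = RedundantRecord(b"account=42;balance=100")
rec.primary = b"account=42;balance=999"         # simulate a corruption fault
print(rec.read())                               # -> b'account=42;balance=100'
```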
2.
3.
Context: Although SPEM 2.0 has great potential for software process modeling, it does not provide concepts or formalisms for the precise modeling of process behavior. Indeed, SPEM fails to address process simulation, execution, monitoring and analysis, which are important activities in process management. On the other hand, BPMN 2.0 is a widely used notation for modeling business processes that has associated tools and techniques to facilitate the aforementioned process management activities. Using BPMN to model software development processes can leverage BPMN's infrastructure to improve the quality of these processes. However, BPMN lacks an important feature for modeling software processes: a mechanism to represent process tailoring. Objective: This paper proposes BPMNt, a conservative extension to BPMN that aims at creating a tailoring representation mechanism similar to the one found in SPEM 2.0. Method: We have used the BPMN 2.0 extensibility mechanism to include the representation of specific tailoring relationships, namely suppression, local contribution, and local replacement, which establish links between process elements (as in SPEM). Moreover, this paper also presents some rules to ensure the consistency of BPMN models when using tailoring relationships. Results: In order to evaluate our proposal we have implemented a tool to support the BPMNt approach and applied it to representing real process adaptations in the context of an academic management system development project. The results of this study showed that the approach and its supporting tool can successfully be used to adapt BPMN-based software processes in real scenarios. Conclusion: We have proposed an approach to enable the reuse and adaptation of BPMN-based software process models, as well as derivation traceability between models, through tailoring relationships. We believe that bringing such capabilities into BPMN will open new perspectives for software process management.
4.
5.
Given the high requirements that satellite networks place on security and the ability to cope with failures, Software-Defined Networking (SDN) technology is introduced, placing a central controller in the network to strengthen its ability to handle failures. First, a satellite network model is designed based on the SDN concept; the operating parameters of satellites in three orbital layers are calculated and the constellation is constructed. Then, using a hierarchical routing approach, a fault-tolerant routing mechanism for satellite networks is designed. Finally, simulation experiments are conducted on the Mininet platform, comparing the results of the fault-tolerant routing algorithm (FTR) with those of the link-aware inter-satellite routing algorithm (LRSR) and the multi-layer satellite network routing algorithm (MLSR). The comparison shows that, with no damaged nodes or links in the network, the total routing delay of FTR is on average 6.06% lower than that of LRSR, demonstrating the effectiveness of introducing SDN centralized control; the packet loss rate of FTR is 25.79% lower than that of MLSR, which likewise targets minimum delay, demonstrating the effectiveness of the temporary-storage routing mechanism designed for Medium Earth Orbit (MEO) satellites in the network model. When node and link failures in the network are severe, the total routing delay of FTR is 3.99% lower than that of LRSR and 19.19% lower than that of MLSR, and its packet loss rate is 16.94% lower than that of LRSR and 37.95% lower than that of MLSR, demonstrating the fault-tolerance effectiveness of FTR. The experimental results verify that the SDN-based satellite network routing mechanism has better fault-tolerance capability.
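A minimal sketch of the centralized recomputation idea behind SDN-based fault-tolerant routing: the controller holds the whole topology and simply routes around links it has marked as failed. The topology, node names, and delay values below are invented for illustration; the paper's layered FTR mechanism and MEO store-and-forward scheme are not reproduced here.

```python
import heapq

# Controller-side minimum-delay routing that skips failed links.
def shortest_path(graph, src, dst, failed=frozenset()):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        for v, w in graph[u].items():
            if (u, v) in failed or (v, u) in failed:
                continue  # controller skips links reported as failed
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Invented toy constellation: two LEO satellites, one MEO satellite.
topo = {"L1": {"L2": 10, "M1": 25}, "L2": {"L1": 10, "M1": 30},
        "M1": {"L1": 25, "L2": 30}}
print(shortest_path(topo, "L1", "M1"))                        # direct link
print(shortest_path(topo, "L1", "M1", failed={("L1", "M1")}))  # reroute via L2
```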
6.
Software managers are routinely confronted with software projects that contain errors or inconsistencies and exceed budget and time limits. By mining software repositories with comprehensible data mining techniques, predictive models can be induced that offer software managers the insights they need to tackle these quality and budgeting problems in an efficient way. This paper deals with the role that the Ant Colony Optimization (ACO)-based classification technique AntMiner+ can play as a comprehensible data mining technique to predict erroneous software modules. In an empirical comparison on three real-world public datasets, the rule-based models produced by AntMiner+ are shown to achieve a predictive accuracy competitive with that of models induced by several other classification techniques, such as C4.5, logistic regression and support vector machines. In addition, we argue that the intuitiveness and comprehensibility of the AntMiner+ models can be considered superior to those of the latter models.
7.
Bounded-degree networks like de Bruijn graphs or wrapped butterfly networks are very important from a VLSI implementation point of view, as well as for applications where the computing nodes in an interconnection network can have only a fixed number of I/O ports. One basic drawback of these networks is that they cannot provide a desired level of fault tolerance because of the bounded degree of the nodes. On the other hand, networks like the hypercube (where the degree of a node grows with the size of the network) can provide the desired fault tolerance, but the design of a node becomes problematic for large networks. In an attempt to combine the best of both worlds, the authors of [IEEE Transactions on Parallel and Distributed Systems 4(9) (1993) 962] proposed hyper-deBruijn (HD) networks, which have many additional features such as logarithmic diameter, partitionability, and embedding capability. However, HD networks are not regular, are not optimally fault tolerant, and their optimal routing is relatively complex. Our purpose in the present paper is to extend the concepts used in the above-mentioned reference to propose a new family of scalable network graphs that retain all the good features of HD networks while being regular and maximally fault tolerant; the optimal point-to-point routing algorithm is significantly simpler than that of HD networks. We have developed some new interesting results on wrapped butterfly networks in the process.
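To make the bounded-degree property concrete, here is a toy construction of the binary de Bruijn graph B(2, n): nodes are n-bit strings, each with exactly two successors, and naive shift routing reaches any node in at most n hops. This only illustrates the base network family discussed above, not the hyper-deBruijn or proposed networks themselves.

```python
from itertools import product

# Binary de Bruijn graph B(2, n): node x1..xn has edges to x2..xn0 and
# x2..xn1, so the out-degree stays fixed at 2 however large n grows.
def debruijn(n):
    nodes = ["".join(bits) for bits in product("01", repeat=n)]
    return {u: [u[1:] + b for b in "01"] for u in nodes}

def route(u, v):
    # Naive point-to-point route: shift in the bits of v one at a time,
    # reaching v in exactly n hops (an upper bound on the diameter).
    path = [u]
    for bit in v:
        path.append(path[-1][1:] + bit)
    return path

g = debruijn(3)
print(max(len(succ) for succ in g.values()))   # constant out-degree: 2
print(route("000", "111"))                     # ['000', '001', '011', '111']
```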
8.
Packages are important high-level organizational units for large object-oriented systems. Package-level metrics characterize attributes of packages such as size, complexity, and coupling. There is a need for empirical evidence to support collecting these metrics and using them as early indicators of important external software quality attributes. In this paper, three suites of package-level metrics (Martin, MOOD and CK) are evaluated and compared empirically in predicting the number of pre-release faults and the number of post-release faults in packages. Eclipse, one of the largest open source systems, is used as a case study. The results indicate that the prediction models based on the Martin suite are more accurate than those based on the MOOD and CK suites across releases of Eclipse.
9.
Software architecture design is an interactive, complex, decision-making process. Such a design process involves the exploration, evaluation, and composition of design alternatives. Increasingly, new computer-aided tools are available to help designers in these complex activities. However, these tools do not know how design is actually done, in other words, by means of which design activities the final artefact was obtained. In fact, architectural design knowledge rests exclusively in the minds of designers, and there is an urgent need to move it, as much as possible, into a computer-supported environment that enables the capture of this type of knowledge. This contribution addresses this need by introducing a model for capturing how the products under development are generated and transformed along the software architecture design process. The proposed model follows an operational perspective, where architectural design decisions are modelled by means of sequences of operations that are applied to the design products. Situation calculus is used to formally express the existence of an object in a given state of a design process. In addition, this formalism allows us to express, without ambiguity, when an operation can be performed in a specific state of the design process.
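A hedged sketch of the operational view described above: a design state (situation) is just the sequence of operations that produced it, and a fluent such as "component c exists in situation s" is evaluated by folding over that history. The operation names are invented for illustration; the paper uses situation calculus proper, not this Python encoding.

```python
# Situations as operation histories; do(a, s) yields the successor situation.
S0 = ()                                     # initial (empty) design

def do(op, s):
    return s + (op,)

def exists(component, s):
    # Fluent: the component exists if its last recorded operation is an add.
    present = False
    for kind, c in s:
        if c == component:
            present = (kind == "add")
    return present

s = do(("add", "Persistence"), do(("add", "UI"), S0))
s = do(("remove", "UI"), s)
print(exists("UI", s), exists("Persistence", s))   # False True
```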
10.
Context: Software architectures should be evaluated during the early stages of software development in order to verify whether the non-functional requirements (NFRs) of the product can be fulfilled. This activity is even more crucial in software product line (SPL) development, since it is also necessary to identify whether the NFRs of a particular product can be achieved by exercising the variation mechanisms provided by the product line architecture or whether additional transformations are required. These issues have motivated us to propose QuaDAI, a method for the derivation, evaluation and improvement of software architectures in model-driven SPL development. Objective: We present in this paper the results of a family of four experiments carried out to empirically validate the evaluation and improvement strategy of QuaDAI. Method: The family of experiments was carried out by 92 participants: Computer Science Master's and undergraduate students from Spain and Italy. The goal was to compare the effectiveness, efficiency, perceived ease of use, perceived usefulness and intention to use with regard to participants using the evaluation and improvement strategy of QuaDAI as opposed to the Architecture Tradeoff Analysis Method (ATAM). Results: The main result was that the participants produced their best results when applying QuaDAI, signifying that the participants obtained architectures with better values for the NFRs faster, and that they found the method easier to use, more useful and more likely to be used. The results of the meta-analysis carried out to aggregate the results obtained in the individual experiments also confirmed these results. Conclusions: The results support the hypothesis that QuaDAI would achieve better results than ATAM in the experiments and that QuaDAI can be considered as a promising approach with which to perform architectural evaluations that occur after the product architecture derivation in model-driven SPL development processes when carried out by novice software evaluators.
11.
Sudipto Ghosh, John L. Kelly, Journal of Systems and Software, 2008, 81(11): 2034-2043
Developers using third-party software components need to test them to satisfy quality requirements. In the past, researchers have proposed fault injection testing approaches in which the component state is perturbed and the resulting effects on the rest of the system are observed. The non-availability of source code in third-party components makes it harder to perform source-code-level fault injection, and even if Java decompilers are used, they do not work well with obfuscated bytecode. We propose a technique that injects faults into Java software by manipulating the bytecode. Existing test suites are assessed according to their ability to detect the injected faults and improved accordingly. We present a case study using an open source Java component that demonstrates the feasibility and effectiveness of our approach. We also evaluate the usability of our approach on obfuscated bytecode.
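The paper injects faults directly into Java bytecode; there is no Python equivalent of that step, so the sketch below only illustrates the surrounding assessment loop: generate faulty variants ("mutants") of a component, run the existing test suite against each, and score the suite by how many injected faults it detects. The `clamp` component and the tests are invented examples.

```python
# Reference component under test (invented example).
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Faulty variants standing in for bytecode-level injected faults.
mutants = [
    lambda x, lo, hi: max(lo, min(x, lo)),   # injected fault: hi -> lo
    lambda x, lo, hi: min(lo, min(x, hi)),   # injected fault: max -> min
]

# The existing test suite, expressed as predicates over the component.
tests = [
    lambda f: f(5, 0, 10) == 5,
    lambda f: f(-3, 0, 10) == 0,
    lambda f: f(99, 0, 10) == 10,
]

detected = 0
for mutant in mutants:
    if any(not t(mutant) for t in tests):    # some test fails -> fault caught
        detected += 1
print(f"test suite detected {detected}/{len(mutants)} injected faults")
```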
12.
In this paper, we propose a permission-based, message-efficient mutual exclusion (MUTEX) algorithm for mobile ad hoc networks (MANETs). To reduce message cost, the algorithm uses the "look-ahead" technique, which enforces MUTEX only among the hosts currently competing for the critical section. We propose mechanisms to handle doze mode and disconnections of mobile hosts. The assumption of a FIFO channel in the original "look-ahead" technique is also relaxed. The proposed algorithm can also tolerate link or host failures, using timeout-based mechanisms. Both analytical and simulation results show that the proposed algorithm works well under various conditions, especially when mobility is high or the load level is low. To our knowledge, this is the first permission-based MUTEX algorithm for MANETs.
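A rough sketch of why "look-ahead" saves messages: a classic permission-based algorithm asks every other host, while look-ahead asks only the hosts currently competing for the critical section. The snippet just counts request messages under both policies at different load levels; it is not the paper's algorithm, which additionally handles doze mode, disconnection, and failures.

```python
import random

random.seed(1)
N = 20                                       # hosts in the MANET (invented)
for load in (0.1, 0.5, 0.9):                 # fraction of hosts competing
    competing = {h for h in range(N) if random.random() < load}
    classic = N - 1                          # request every other host
    lookahead = len(competing - {0})         # host 0 asks only competitors
    print(f"load={load}: classic sends {classic}, look-ahead sends {lookahead}")
```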
13.
We describe a single-sided matrix converter (SSMC) designed for safety-critical applications such as flight control actuation systems. Dynamic simulations of multi-phase SSMCs using Matlab Simulink are carried out to evaluate their fault tolerance capabilities. Different numbers of phases and power converter topologies are investigated under single-phase open-circuit, single-switch open-circuit, and single-switch short-circuit conditions. The simulation results confirm the 5-phase SSMC design as a good compromise between fault tolerance and converter size/volume. A 5-phase SSMC prototype was built, and experimental results verify the effectiveness of our design.
14.
The use of data mining in software engineering has been the subject of several research papers, most of which apply historical data to decision-making activities such as cost estimation and the prediction of product or project attributes. This paper addresses the ability to predict faulty software modules and to correlate faulty modules with product attributes using statistics. Correlations between the attributes and the categorical class variable are studied by generating a pool of records from each dataset and repeatedly selecting two samples from the dataset to compare. The correlation between the two selected records is studied in terms of whether the module defect attribute changes from faulty to non-faulty (or the opposite) and how the value of each evaluated attribute changes between the two records (i.e. equal, larger, or smaller). The goal was to determine whether certain attributes consistently accompany a change in module state from faulty to non-faulty, or the opposite. Results indicated that this technique can be very useful in studying the correlation between each attribute and the defect-status attribute. A prediction algorithm was also developed based on statistics of the module and of the overall dataset; it gives each attribute separate true-class and faulty-class predictions. We found that dividing the prediction capability of each attribute in this way (i.e. into correct and faulty module prediction) facilitates understanding the impact of attribute values on the class, and hence improves the overall prediction relative to previous studies and data mining algorithms. Results were evaluated and compared with other algorithms and previous studies, using ROC metrics to evaluate the performance of the developed predictors. These metrics showed that accuracy calculated in the traditional way, as the number of accurately predicted records divided by the total number of records in the dataset, does not necessarily give the best indication of a metric's or an algorithm's predictive power, and such figures can be misleading if other metrics are not considered alongside them; the ROC metrics were able to reveal other important aspects of performance.
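A sketch of the pairwise comparison the abstract describes: draw pairs of modules from a defect dataset and, for every attribute, record how its value changed (larger, smaller, or equal) whenever the defect label flipped between the two records. The attribute names and data below are invented stand-ins for a real defect dataset.

```python
import random
from collections import Counter

random.seed(0)
# Invented module records with two attributes and a defect label.
dataset = [{"loc": random.randint(10, 500),
            "complexity": random.randint(1, 30),
            "faulty": random.random() < 0.3}
           for _ in range(200)]

changes = Counter()
for _ in range(5000):
    a, b = random.sample(dataset, 2)
    if a["faulty"] == b["faulty"]:
        continue                          # keep only pairs whose class flips
    if a["faulty"]:                       # orient the pair: a clean, b faulty
        a, b = b, a
    for attr in ("loc", "complexity"):
        if b[attr] > a[attr]:
            change = "larger"
        elif b[attr] < a[attr]:
            change = "smaller"
        else:
            change = "equal"
        changes[(attr, change)] += 1      # attribute movement on clean->faulty

for key, count in sorted(changes.items()):
    print(key, count)
```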
15.
We propose a differential versioning based data storage (DiVers) architecture for distributed storage systems, which relies on a novel erasure coding technique that exploits sparsity across versions. The emphasis of this work is to demonstrate how sparsity exploiting codes (SEC), originally designed for I/O optimization, can be extended to significantly reduce storage overhead in a repository of versioned data. In addition to facilitating reduced storage, we address some key reliability aspects for DiVers such as (i) mechanisms to deploy the coding technique with arbitrarily varying size of data across versions, and (ii) investigating the right allocation strategy for the encoded blocks over a network of distributed nodes across different versions so as to achieve the best fault tolerance. We also discuss system issues related to the management of data structures for accessing and manipulating the files over the differential versions.
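To illustrate the sparsity such a scheme exploits: successive versions of a file typically differ in few places, so the bytewise XOR between versions is mostly zeros and is cheap to store and encode. The SEC erasure code itself is not reproduced here; this sketch, with invented file contents, only shows the sparse differential a versioned store would work with.

```python
# Two versions of a small "file" differing by one edit (invented content).
v1 = bytearray(b"config: retries=3 timeout=30 mode=primary")
v2 = bytearray(v1)
v2[16:17] = b"5"                              # small edit: retries=3 -> 5

# The version delta is sparse: almost all bytes XOR to zero.
delta = bytes(a ^ b for a, b in zip(v1, v2))
nonzero = sum(1 for x in delta if x)
print(f"{nonzero}/{len(delta)} non-zero bytes in the version delta")

# Reconstruct v2 from v1 plus the sparse delta (what a decoder would do).
restored = bytes(a ^ d for a, d in zip(v1, delta))
assert restored == bytes(v2)
```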
16.
This paper focuses on the design of a single scheme that simultaneously performs fault isolation and fault-tolerant control for a class of uncertain nonlinear systems with faults ranging over a finite cover. The proposed framework relies on supervisory switching among a family of pre-computed candidate controllers, without any additional model or filter. The states are ensured to be bounded during the switching delay, which ends when the correct stabilizing controller has been selected. Simulation results for a flexible-joint robot example illustrate the efficiency of the proposed method.
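A minimal sketch of the supervisory switching idea: a bank of pre-computed controllers, one per fault hypothesis, and a supervisor that dwells on a controller until the state stops shrinking, then switches to the next. The scalar plant, gains, and switching rule below are invented toy values; the paper's scheme covers uncertain nonlinear systems with a boundedness guarantee during the switching delay.

```python
def plant(x, u, b):
    # Discrete-time unstable scalar plant; b is the (possibly faulty) gain.
    return 1.2 * x + b * u

controllers = [lambda x: -1.5 * x,    # stabilizing if b = +1 (healthy)
               lambda x: +1.5 * x]    # stabilizing if b = -1 (actuator fault)

b, x, active = -1.0, 5.0, 0           # fault present: actuator sign reversed
for step in range(12):
    x_next = plant(x, controllers[active](x), b)
    if abs(x_next) > abs(x):          # divergence: isolate fault, switch
        active = (active + 1) % len(controllers)
        print(f"step {step}: switching to controller {active}")
    x = x_next
print(f"final |x| = {abs(x):.4f}")    # converges once the right one is found
```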
17.
Building a software architecture that meets functional requirements is a well-consolidated activity, whereas keeping quality attributes high is still an open challenge. In this paper we introduce an optimization framework that supports the decision of whether to buy software components or to build them in-house when designing a software architecture. We devise a non-linear cost/quality optimization model based on decision variables indicating the set of architectural components to buy and to build in order to minimize the software cost while keeping satisfactory values of quality attributes. From this point of view, our tool can be ideally embedded into a Cost Benefit Analysis Method to provide decision support to software architects. The novelty of our approach consists in building costs and quality attributes on a common set of decision variables related to software development. We start from a special case of the framework where the quality constraints relate to delivery time and product reliability, and the model solution also devises the amount of unit testing to be performed on built components. We then generalize the framework formulation to represent a broader class of architectural cost-minimization problems under quality constraints, and discuss the advantages and limitations of the approach.
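A brute-force sketch of the buy-vs-build decision structure: for each architectural component choose COTS ("buy") or in-house ("build"), minimizing total cost subject to delivery-time and reliability floors. All component names and numbers are invented; the paper formulates this as a non-linear optimization that also sizes the unit testing of built components, which is omitted here.

```python
from itertools import product

# Per component and decision: (cost, delivery_days, reliability). Invented.
components = {
    "auth":    {"buy": (30, 5, 0.99),  "build": (18, 25, 0.95)},
    "billing": {"buy": (50, 10, 0.98), "build": (35, 40, 0.97)},
    "reports": {"buy": (20, 5, 0.90),  "build": (12, 20, 0.96)},
}

best = None
for choice in product(("buy", "build"), repeat=len(components)):
    picks = dict(zip(components, choice))
    cost = sum(components[c][d][0] for c, d in picks.items())
    days = max(components[c][d][1] for c, d in picks.items())  # parallel teams
    rel = 1.0
    for c, d in picks.items():
        rel *= components[c][d][2]                             # series system
    if days <= 30 and rel >= 0.90 and (best is None or cost < best[0]):
        best = (cost, picks)
print(best)   # cheapest feasible mix of bought and built components
```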
18.
Social robotics poses tough challenges to software designers, who are required to address difficult architectural drivers such as acceptability and trust in robots, as well as to guarantee that robots establish personalized interactions with their users. Moreover, recurrent software design issues such as ensuring interoperability and improving the reusability and customizability of software components also arise in this context. Designing and implementing social robotic software architectures is a time-intensive activity requiring multi-disciplinary expertise, which makes it difficult to rapidly develop, customize, and personalize robotic solutions. These challenges may be mitigated at design time by choosing certain architectural styles, implementing specific architectural patterns and using particular technologies. Leveraging our experience in the MARIO project, in this paper we propose a series of principles that social robots may benefit from; these principles also lay the foundations for the design of a reference software architecture for social robots. The goal of this work is twofold: (i) establishing a reference architecture whose components are unambiguously characterized by an ontology, thus allowing them to be easily reused in order to implement and personalize social robots; (ii) introducing a series of standardized software components for social robot architectures (mostly relying on ontologies and semantic technologies) to enhance interoperability, improve explainability, and favor rapid prototyping.
19.
Ideally, when faults happen, the closed-loop system should be capable of maintaining its present operation. This leads to the recently studied area of fault-tolerant control (FTC). This paper addresses soft-computing- and signal-processing-based active FTC for a benchmark process. The FTC design has three levels: Level 1 comprises a traditional control loop with the sensor and actuator interface and the controller; Level 2 comprises the functions of online fault detection and identification; Level 3 comprises the supervisor functionality. Online fault detection and identification consists of a signal processing module, a feature extraction module, a feature clustering module and a fault decision module. Wavelet analysis is used in the signal processing module. In the feature extraction module, the feature vector of the sensor faults is constructed using wavelet analysis, a sliding window, and, as statistical measures, the absolute-maximum changing ratio and the variance changing ratio. In the feature clustering module, the self-organizing map (SOM), a subtype of artificial neural network, is applied as a classifier of the feature vector. A three-tank system is used as the benchmark process, controlled by a fuzzy logic controller. Faults are applied to the three level sensors; these sensor faults represent incorrect readings from the sensors that the system is equipped with. When a particular fault occurs in the system, a suitable control scheme is selected online by the supervisor functionality to maintain the closed-loop performance of the system. Active FTC is achieved by switch-mode control using the fuzzy logic controller. Simulation results show that the benchmark process maintains acceptable performance with FTC under sensor faults; in other words, when the system suffers sensor faults, the soft-computing- and signal-processing-based FTC preserves the best achievable performance of the system.
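A sketch of the two statistical features named above: over a sliding window, compare the absolute maximum and the variance of the current window against the previous one; a sudden jump in either ratio flags a candidate sensor fault. The wavelet decomposition and the SOM classifier are omitted, and the window size, threshold, and signal are invented.

```python
from statistics import pvariance

def changing_ratios(signal, w=10):
    # Per non-overlapping window: |max| changing ratio and variance
    # changing ratio relative to the previous window.
    feats = []
    for i in range(w, len(signal) - w + 1, w):
        prev, cur = signal[i - w:i], signal[i:i + w]
        amax_ratio = max(abs(x) for x in cur) / (max(abs(x) for x in prev) or 1e-9)
        var_ratio = pvariance(cur) / (pvariance(prev) or 1e-9)
        feats.append((i, amax_ratio, var_ratio))
    return feats

# Invented level-sensor reading: slow ramp, then the sensor drops to zero.
level = [2.0 + 0.01 * k for k in range(30)] + [0.0] * 30
for i, amax_r, var_r in changing_ratios(level):
    flag = "FAULT?" if amax_r < 0.5 or var_r > 10 else ""
    print(f"t={i:3d} |max| ratio={amax_r:5.2f} var ratio={var_r:8.2f} {flag}")
```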
20.
Mobile computing systems provide users with access to information regardless of their geographical location. In these systems, Mobile Support Stations (MSSs) play the role of providing reliable and uninterrupted communication and computing facilities to mobile hosts. The failure of an MSS can cause interruption of the services provided by the mobile system. Two basic schemes for tolerating the failure of MSSs exist in the literature. The first scheme is based on the principle of checkpointing used in distributed systems. The second scheme is based on replicating the state information of mobile hosts in a number of secondary support stations; depending on the replication scheme used, this approach is further classified as a pessimistic or an optimistic technique. In this paper, we propose a hybrid scheme which combines the pessimistic and the optimistic replication schemes, attempting to strike a balance between the long delay caused by the pessimistic scheme and the high memory requirements of the optimistic one. In order to find the best ratio between the numbers of pessimistic and optimistic secondary stations in the proposed scheme, we used fuzzy logic. We also used simulation to compare the performance of the proposed scheme with those of the optimistic and the pessimistic schemes. Simulation results showed that the proposed scheme performs better than either scheme in terms of delay and memory requirements.