Similar Documents
20 similar documents found (search time: 46 ms)
1.
Sporadic operations such as rolling upgrades or machine instance redeployment are prone to unpredictable failures in the public cloud, largely because of the cloud's inherently high variability. Previous dependability research has established several recovery methods for cloud failures. In this paper, we first propose eight recovery patterns for sporadic operations on the public cloud. We then present a filtering process that selects the applicable recovery patterns. We propose an automation mechanism that generates recovery actions for the applicable recovery patterns based on our resource state transition algorithm, and a methodology to evaluate the generated recovery actions against three recovery evaluation metrics: Recovery Time, Recovery Cost, and Recovery Impact. This quantitative evaluation leads to the selection of acceptable recovery actions. We propose two recovery-action selection mechanisms: one based on user constraints over the recovery evaluation metrics, and the other based on a Pareto-set search algorithm. We implement a recovery service and illustrate its applicability by recovering from errors occurring in the rolling upgrade operation on the AWS cloud.
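The second selection mechanism lends itself to a small illustration. Below is a minimal Python sketch of Pareto-set filtering over the paper's three metrics; the action names and scores are hypothetical, and lower is assumed better on each metric.

```python
# A minimal sketch of Pareto-set selection over recovery actions, assuming each
# action is scored by the three metrics from the paper: (time, cost, impact).
# The candidate actions and their scores below are hypothetical.

def dominates(a, b):
    """a dominates b if a is no worse on every metric and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(actions):
    """Return the actions whose (time, cost, impact) vector is non-dominated."""
    return {
        name: m for name, m in actions.items()
        if not any(dominates(m2, m) for n2, m2 in actions.items() if n2 != name)
    }

if __name__ == "__main__":
    candidates = {                        # (recovery time s, cost $, impact score)
        "rollback_instance":   (120, 0.50, 3),
        "redeploy_instance":   (300, 0.10, 1),
        "retry_operation":     (60,  0.05, 5),
        "failover_to_standby": (30,  2.00, 2),
        "retry_then_rollback": (180, 0.60, 4),   # dominated by rollback_instance
    }
    for name, metrics in pareto_set(candidates).items():
        print(name, metrics)              # prints the first four actions only
```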

2.
This paper presents a comprehensive review of methods for real-time schedule recovery in transportation services. The survey concentrates on published research on recovering planned schedules after one or several severe disruptions such as vehicle breakdowns, accidents, and delays. Only vehicle assignment and rescheduling are reviewed; crew scheduling and passenger logistics problems during disruptions are not. Real-time vehicle schedule recovery problems (RTVSRP) are classified into three classes: vehicle rescheduling for road-based services, train rescheduling, and airline schedule recovery. For each class, a classification of the models is presented based on problem formulations and solution strategies. The paper concludes that RTVSRP is a challenging problem that requires quick, good-quality solutions to difficult and complex situations involving many different contexts, restrictions, and objectives. The paper also identifies research gaps to be investigated in the future, stimulating research in this area.

3.
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks, i.e., segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one from each genomic map, such that the total length of the syntenic blocks in these subsequences is maximized. For any constant d ≥ 2, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provide the first explicit lower bounds on approximating MSR-d for all d ≥ 2; in particular, we show that MSR-d is NP-hard to approximate within Ω(d/log d). In the other direction, we show that the previous 2d-approximation for MSR-d can be optimized into a polynomial-time algorithm even if d is not a constant but part of the input. We then extend our inapproximability results to several related problems, including CMSR-d, δ-gap-MSR-d, and δ-gap-CMSR-d.

4.
We study the problem of parallel computation of a schedule for a system of n unit-length tasks on m identical machines, when the tasks are related by a set of precedence constraints. We present NC algorithms for computing an optimal schedule in the case where m, the number of available machines, does not vary with time and the precedence constraints are represented by a collection of outtrees. The algorithms run on an exclusive-read, exclusive-write (EREW) PRAM. Their complexities are O(log n) and O((log n)^2) parallel time using O(n^2) and O(n) processors, respectively. The schedule computed by our algorithms is a height-priority schedule. As a complementary result we show that it is very unlikely that computing such a schedule is in NC when any of the above conditions is significantly relaxed: we prove that the problem is P-complete under logspace reductions when the precedence constraints are a collection of intrees and outtrees, or for a collection of outtrees when the number of available machines is allowed to increase with time. The time span of a height-priority schedule for an arbitrary precedence graph is at most 2 − 1/(m − 1) times the optimum (N. F. Chen and C. L. Liu, Proc. 1974 Sagamore Computer Conference on Parallel Processing, T. Feng (Ed.), Springer-Verlag, Berlin, 1975, pp. 1–16). Whereas it is P-complete to produce these classical height-priority schedules even for very restricted precedence graphs, we present a simple NC parallel algorithm which produces a different schedule whose length is at most 2 − 1/m times the optimum.
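For context, the height-priority rule the abstract refers to can be stated as a simple sequential greedy; the sketch below computes it for unit tasks on m machines (the paper's contribution is computing such a schedule in NC, which this sketch does not attempt). The toy outtree instance is an assumption.

```python
# A minimal sequential sketch of a height-priority schedule for unit-length
# tasks on m identical machines: priority = height (longest chain to a sink).
from collections import defaultdict

def height_priority_schedule(succ, m):
    """succ: {task: [successors]} describing a precedence DAG. Returns {task: slot}."""
    tasks = set(succ) | {t for ss in succ.values() for t in ss}
    height = {}
    def h(t):                                  # height = longest chain from t to a sink
        if t not in height:
            height[t] = 1 + max((h(s) for s in succ.get(t, [])), default=0)
        return height[t]
    for t in tasks:
        h(t)
    preds = defaultdict(set)
    for t, ss in succ.items():
        for s in ss:
            preds[s].add(t)
    done, slot, schedule = set(), 0, {}
    while len(done) < len(tasks):
        ready = sorted((t for t in tasks if t not in done and preds[t] <= done),
                       key=lambda t: -height[t])       # highest level first
        for t in ready[:m]:                            # at most m unit tasks per slot
            schedule[t] = slot
        done |= set(ready[:m])                         # mark after the slot closes
        slot += 1
    return schedule

# an outtree rooted at "a", scheduled on two machines
print(height_priority_schedule({"a": ["b", "c"], "b": ["d", "e"]}, m=2))
```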

5.
Given an undirected graph G=(V,E), the Graph Coloring Problem (GCP) consists in assigning a color to each vertex of G such that any two adjacent vertices receive different colors, while minimizing the number of colors used. State-of-the-art algorithms generally handle the explicit constraints of GCP (any two adjacent vertices must receive different colors) but do not specifically exploit the implicit constraints between non-adjacent vertices that are implied by the explicit ones. In this paper, we propose an exact algorithm with learning for GCP that exploits the implicit constraints using propositional logic. Our algorithm is compared with several of the best exact algorithms in the literature. The experimental results show that our algorithm outperforms the others on many instances; in particular, it closes the open DIMACS instance 4-FullIns_5.
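For contrast with the exact method, here is a minimal greedy baseline that enforces only the explicit adjacent-vertex constraint; it yields a valid coloring and thus an upper bound on the chromatic number, not an optimum, and it is not the paper's learning-based algorithm.

```python
# A minimal greedy-coloring sketch (largest-degree-first). This is only a
# heuristic upper bound on the chromatic number.

def greedy_coloring(adj):
    """adj: dict vertex -> set of neighbors. Returns {vertex: color index}."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):    # high degree first
        used = {color[u] for u in adj[v] if u in color}  # colors taken by neighbors
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

# A 5-cycle needs 3 colors; greedy finds a valid (not always optimal) coloring.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
coloring = greedy_coloring(cycle5)
assert all(coloring[u] != coloring[v] for u in cycle5 for v in cycle5[u])
print(coloring, "colors used:", len(set(coloring.values())))
```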

6.
Cyclic scheduling has been widely studied because of the importance of its applications in manufacturing systems and in computer science. In this class of problems, a finite set of tasks with precedence relations and resource constraints must be executed repetitively while maximizing the throughput. Many applications also require the execution schedule to be periodic, i.e., the execution of each task is repeated with a fixed global period w. The present paper develops a new method to build periodic schedules with cumulative resource constraints, periodic release dates, and deadlines. The main idea is to fix the period w, to unwind the cyclic scheduling problem for some number of iterations, and to add precedence relations so that the minimum time lag between two successive executions of any task equals w. Then, using any usual (non-cyclic) scheduling algorithm to compute task starting times for the unwound problem, we prove that the method either converges to a periodic schedule of period w or fails to compute a schedule. A non-polynomial upper bound on the number of iterations to unwind in order to guarantee that cyclic precedence relations and resource constraints are fulfilled is also provided. The method is successfully applied to a real-life problem, namely the software pipelining of inner loops on an embedded VLIW processor core using a Graham list scheduling algorithm.
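A minimal sketch of the unwinding idea on an assumed toy instance: fix w, replicate each task for K iterations with the extra lag precedence start(t, k+1) ≥ start(t, k) + w, greedily list-schedule the unwound instance under one cumulative resource, and test whether successive iterations settle exactly w apart. Unit-duration tasks and the instance below are illustrative assumptions, not the paper's model.

```python
# Unwind a cyclic instance for K iterations and list-schedule it greedily.
def unwound_schedule(tasks, prec, capacity, w, K):
    """tasks: {name: resource demand}; prec: intra-iteration edges (before, after)."""
    start, t = {}, 0
    pending = {(v, k) for v in tasks for k in range(K)}
    while pending:
        load = 0
        for v, k in sorted(pending, key=lambda x: (x[1], x[0])):  # earliest iteration first
            preds_done = all((u, k) in start and start[(u, k)] < t
                             for u, vv in prec if vv == v)
            lag_ok = k == 0 or ((v, k - 1) in start and start[(v, k - 1)] + w <= t)
            if preds_done and lag_ok and load + tasks[v] <= capacity:
                start[(v, k)] = t
                load += tasks[v]
        pending -= set(start)
        t += 1
    return start

tasks = {"a": 1, "b": 2, "c": 1}              # resource demand per unit-time task
prec = [("a", "b"), ("a", "c")]
s = unwound_schedule(tasks, prec, capacity=2, w=3, K=4)
periodic = all(s[(v, k)] - s[(v, k - 1)] == 3 for v in tasks for k in range(2, 4))
print("periodic with period w=3 from iteration 1 on:", periodic)
```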

7.
This paper outlines a method for solving the stereovision matching problem using edge segments as primitives. Stereovision matching commonly uses the following constraints: epipolar, similarity, smoothness, ordering, and uniqueness. We propose a new matching strategy in a fuzzy context onto which these constraints are mapped. The fuzzy context integrates both Fuzzy Clustering and Fuzzy Cognitive Maps. For this purpose, a network of concepts (nodes) is designed in which each concept represents a pair of primitives to be matched, and each concept is associated with a fuzzy value that determines the degree of correspondence. The goal is to achieve high performance in terms of correct matches. The main contribution of this paper is the use of the fuzzy context to build the network of concepts onto which the matching constraints are mapped. Initially, each concept value is loaded via Fuzzy Clustering and then updated within the Fuzzy Cognitive Maps framework; this updating is driven by the influence of the remaining neighboring concepts until a good global matching solution is achieved. Under this fuzzy approach we obtain both quantitative and qualitative matching correspondences. The method works as a relaxation matching approach, and its performance is illustrated by a comparative analysis against several existing global matching methods.
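The relaxation step can be illustrated with a generic Fuzzy Cognitive Map update; the weight matrix, initial (Fuzzy-Clustering-style) values, and squashing function below are assumptions, not the paper's tuned model.

```python
# A minimal sketch of an FCM relaxation: each node holds a matching degree in
# [0, 1] that is repeatedly updated from its neighbors through a sigmoid.
import math

def fcm_relax(values, weights, steps=50, lam=1.0):
    """values: initial concept activations; weights[j][i]: influence of j on i."""
    n = len(values)
    for _ in range(n and steps):
        values = [
            1.0 / (1.0 + math.exp(-lam * (values[i] +
                   sum(weights[j][i] * values[j] for j in range(n) if j != i))))
            for i in range(n)
        ]
    return values

init = [0.8, 0.3, 0.6]                   # initial degrees of correspondence
w = [[0.0,  0.4, -0.2],                  # mutually supporting / inhibiting pairs
     [0.4,  0.0,  0.1],
     [-0.2, 0.1,  0.0]]
print([round(v, 3) for v in fcm_relax(init, w)])
```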

8.
Information Systems, 2005, 30(2): 89–118
Business rules are the basis of any organization. From an information systems perspective, these business rules function as constraints on a database, helping ensure that the structure and content of the real world (sometimes referred to as the miniworld) are accurately incorporated into the database. It is important to elicit these rules during the analysis and design stage, since the captured rules are the basis for the subsequent development of a business constraints repository. We present a taxonomy for set-based business rules and describe an overarching framework for modeling rules that constrain the cardinality of sets. The proposed framework yields various types of constraints (attribute, class, participation, projection, co-occurrence, appearance, and overlapping) on a semantic model that supports abstractions such as classification, generalization/specialization, aggregation, and association. We formally define the syntax of the framework in Backus-Naur Form and explicate its semantics using first-order logic. We describe partial ordering among the constraints and define the concept of metaconstraints, which can be used for automatic constraint-consistency checking during the design stage itself. We demonstrate the practicality of our approach with a case study and show how our approach to modeling business rules integrates seamlessly into existing database design methodology. Via the proposed framework, we show how explicitly capturing data semantics helps bridge the semantic gap between the real world and its representation in an information system.
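One constraint type from the taxonomy, a participation (cardinality) constraint on an association, can be checked mechanically; the relationship data and bounds below are hypothetical.

```python
# An illustrative check of a participation constraint: each instance must take
# part in between lo and hi association pairs. (Instances that appear in no
# pair at all would additionally require the full instance set to detect.)

def violates_participation(pairs, key_index, lo, hi):
    """pairs: association instances; returns {instance: count} for violators."""
    counts = {}
    for pair in pairs:
        k = pair[key_index]
        counts[k] = counts.get(k, 0) + 1
    return {k: c for k, c in counts.items() if not lo <= c <= hi}

works_on = [("alice", "p1"), ("alice", "p2"), ("alice", "p3"), ("bob", "p1")]
# rule: "every employee works on 1 to 2 projects"
print(violates_participation(works_on, key_index=0, lo=1, hi=2))  # {'alice': 3}
```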

9.
Given a set of n events (or jobs) constrained by a precedence relation, we want to order them into a totally ordered sequence (i.e., a one-machine schedule). Each event has an integer cost (which may be negative) associated with it, and our objective is to minimize the maximum cumulative cost within a schedule, i.e., to obtain a cumulative cost-optimal schedule. We introduce the concept of “strict optimality” and investigate properties of strictly optimal schedules. The usefulness of these schedules is demonstrated in the special case where the precedence relation is “series-parallel”. For this case we describe an algorithm which finds a cumulative cost-optimal schedule in O(n log n) time.
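To make the objective concrete, here is a brute-force sketch for tiny instances: enumerate all precedence-respecting sequences and minimize the maximum cumulative cost. The paper's algorithm achieves O(n log n) for series-parallel orders; this enumeration is exponential and purely illustrative.

```python
# Brute-force min-max-cumulative-cost scheduling on a hypothetical instance.
from itertools import permutations, accumulate

def best_schedule(costs, prec):
    """costs: {event: int}; prec: set of (before, after) pairs."""
    best, best_seq = float("inf"), None
    for seq in permutations(costs):
        pos = {e: i for i, e in enumerate(seq)}
        if any(pos[a] > pos[b] for a, b in prec):   # precedence violated
            continue
        peak = max(accumulate(costs[e] for e in seq))
        if peak < best:
            best, best_seq = peak, seq
    return best, best_seq

costs = {"a": 3, "b": -5, "c": 2, "d": -1}
prec = {("a", "b"), ("c", "d")}                     # a before b, c before d
print(best_schedule(costs, prec))                   # -> (3, ('a', 'b', 'c', 'd'))
```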

10.
The dynamic nature of mobile ad hoc networks poses fundamental challenges to the design of service composition schemes that can satisfy end-to-end quality-of-service requirements and minimize the effect of service disruptions caused by dynamic link and node failures. Although existing research on mobile ad hoc networks has focused on improving reliability, little work has considered service deliveries spanning multiple components. Moreover, service composition strategies proposed for wireline networks (such as the Internet) are poorly suited to highly dynamic wireless ad hoc networks. This paper proposes a new service composition and recovery framework designed to achieve minimum service disruption for mobile ad hoc networks. The framework consists of two tiers: service routing, which selects the service components that support the service path, and network routing, which finds the optimal network path that connects these service components. Our framework is based on the disruption index, a novel concept that characterizes service disruption aspects, such as frequency and duration, that are not captured adequately by conventional metrics such as reliability and availability. Using the definition of the disruption index, we formulate the problem of minimum-disruption service composition and recovery (MDSCR) as a dynamic programming problem and analyze the properties of its optimal solution for ad hoc networks with a known mobility plan. Based on the derived analytical insights, we present an MDSCR heuristic algorithm for ad hoc networks with uncertain node mobility. This heuristic approximates the optimal solution with one-step lookahead prediction, where service link lifetime is predicted from node location and velocity using linear regression. We use simulations to evaluate our algorithm in various network environments, and the results validate that it achieves better performance than conventional methods.

11.
Finding a path that satisfies multiple Quality-of-Service (QoS) constraints is vital to the deployment of newly emerging services. However, existing algorithms are not very efficient or effective at finding such a path, and few works address three or more QoS constraints. In this paper, we present an enhanced version of a fully polynomial time approximation scheme (EFPTAS) for the multi-constrained optimal path (MCOP) problem. We make four major contributions. First, we let the proposed algorithm construct an auxiliary graph, through which the QoS parameters on each returned path are guaranteed not to exceed the given constraints. Second, we adopt a concept called the nonlinear definition of path constraints in EFPTAS to reduce both time and space complexity. Third, we let EFPTAS run iteratively to facilitate progressive refinement of the result. In addition, we identify several deployment issues for the proposed algorithm and present the essential steps of how and when EFPTAS should be invoked. By analyzing the proposed algorithm theoretically, we find that EFPTAS finds a (1+ε)-approximation path in the network with time complexity O(|E||V|/ε) (where |E| is the number of edges and |V| is the number of nodes), which outperforms the previous best-known algorithm for MCOP. We conduct an extensive experimental comparison between our algorithm and the previous best-known study; the results indicate that EFPTAS finds a path of preferable quality with low complexity.
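For contrast with the (1+ε) scheme, here is an exact but exponential-time depth-first search that prunes a partial path as soon as any additive QoS weight exceeds its bound; the three-constraint example graph is hypothetical.

```python
# Exact multi-constrained path search by pruned DFS over simple paths.
def feasible_path(graph, src, dst, bounds):
    """graph: {u: [(v, (w1, w2, ...)), ...]}; bounds: per-metric upper limits."""
    def dfs(u, used, visited):
        if u == dst:
            return [u]
        for v, w in graph.get(u, []):
            acc = tuple(a + b for a, b in zip(used, w))   # accumulated QoS weights
            if v not in visited and all(a <= b for a, b in zip(acc, bounds)):
                tail = dfs(v, acc, visited | {v})
                if tail:
                    return [u] + tail
        return None
    return dfs(src, (0,) * len(bounds), {src})

g = {"s": [("a", (1, 5, 2)), ("b", (4, 1, 1))],
     "a": [("t", (1, 5, 2))],
     "b": [("t", (4, 1, 1))]}
print(feasible_path(g, "s", "t", bounds=(10, 3, 3)))   # -> ['s', 'b', 't']
```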

12.
Real, 2001, 7(3): 255–273
Video delivery from a server to a client across a network is an important component of many multimedia applications. When delivering a video stream across a resource-constrained network, loss of frames may be unavoidable. Under such circumstances, it is desirable to find a server transmission schedule that efficiently utilizes the network resources while maximizing the perceived quality of service (QoS) at the client. To address this issue, we introduce the notion of selective frame discard at the server and formulate the optimal selective frame discard problem using a QoS-based cost function. Given network bandwidth and client buffer constraints, we develop an O(N log N) algorithm to find the minimum number of frames that must be discarded in order to meet these constraints, and we formally establish the algorithm's correctness. We then present a dynamic-programming-based algorithm for solving the optimal selective frame discard problem. Since the computational complexity of the optimal algorithm is prohibitively high in general, we also develop several efficient heuristic algorithms for selective frame discard. These algorithms are evaluated using JPEG and MPEG video traces.
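A greedy sketch of the idea (not the paper's optimal O(N log N) or dynamic-programming algorithms): frames stream in playback order over a constant-bandwidth link, and a frame is discarded when it cannot finish arriving before its playback instant. Frame sizes, bandwidth, and startup delay are hypothetical, and the client-buffer bound is omitted for brevity.

```python
# Greedy selective frame discard under a playback-deadline constraint.
def greedy_discard(sizes, bandwidth, startup):
    """sizes[i]: bits of frame i; frame i plays at time startup + i (one frame/tick)."""
    sent_bits = 0.0
    dropped = []
    for i, size in enumerate(sizes):
        deadline = startup + i                    # playback instant of frame i
        if (sent_bits + size) / bandwidth > deadline:
            dropped.append(i)                     # would arrive too late: discard
        else:
            sent_bits += size
    return dropped

sizes = [8000, 3000, 3000, 9000, 3000, 3000, 8000, 3000]
print(greedy_discard(sizes, bandwidth=4000.0, startup=2))   # -> [3]
```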

13.
In this paper, we propose the Self-Adapting Recovery Net (SARN), an extended Petri net model for specifying exceptional behavior in business processes. SARN adapts the structure of the underlying Petri net at run time to handle exceptions while keeping the Petri net design simple. The proposed framework caters for the specification of high-level recovery policies that are associated either with a single task or with a set of tasks, called a recovery region. These recovery policies are generic directives that model exceptions at design time, together with a set of primitive operations used at run time to handle the occurrence of exceptions. We identify a set of recovery policies that are useful and commonly needed in many practical situations. A tool has been developed to illustrate the viability of the proposed exception handling technique.

14.
We consider the problem of retrieving consistent answers over databases that may be inconsistent with respect to a set of integrity constraints. In particular, we concentrate on sets of constraints consisting of key dependencies, and we give an algorithm that computes the consistent answers for a large and practical class of conjunctive queries. Given a query q, the algorithm returns a first-order query Q (called a query rewriting) such that for every (potentially inconsistent) database I, the consistent answers for q can be obtained by evaluating Q directly on I.
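For a single key dependency and a simple selection query, the rewriting idea reduces to a per-key-group check, sketched below on a hypothetical table: a key value is a consistent answer iff every tuple in its key group satisfies the query, so the answer survives every repair of the inconsistent database.

```python
# Consistent query answering under one key dependency, evaluated directly on
# the (possibly inconsistent) table, in the spirit of a first-order rewriting.
def consistent_answers(rows, key, predicate):
    """rows: list of dicts; key: attribute name; returns consistent key values."""
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    # a key value qualifies only if *all* of its (possibly conflicting) tuples
    # satisfy the query predicate, i.e. the answer holds in every repair
    return {k for k, g in groups.items() if all(predicate(r) for r in g)}

emp = [  # two conflicting tuples for "alice" violate the key dependency
    {"name": "alice", "salary": 70000},
    {"name": "alice", "salary": 40000},
    {"name": "bob",   "salary": 60000},
]
print(consistent_answers(emp, "name", lambda r: r["salary"] > 50000))  # {'bob'}
```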

15.
We consider two problems that arise in designing two-level star networks under service-quality considerations. Given a set of nodes with pairwise traffic demand and a central hub, we select p hubs, connect them to the central hub with direct links, and then connect each non-hub node to a hub, which results in a star/star network. In the first problem, the Star p-hub Center Problem, we minimize the length of the longest path in the resulting network. In the second problem, the Star p-hub Median Problem with Bounded Path Lengths, the aim is to minimize the total routing cost subject to upper-bound constraints on the path lengths. We propose formulations for these problems and report the outcomes of a computational study comparing the performance of our formulations.

16.
The Resource-Constrained Project Scheduling Problem is one of the most intensively investigated scheduling problems. It requires scheduling a set of interrelated activities subject to precedence relationships and limited renewable resources, with the objective of minimizing the project duration. We propose a new destructive lower bound for this challenging NP-hard problem. Starting from a previously suggested LP model, we propose several original valid inequalities that aim at tightening the model representation. These new inequalities are based on precedence constraints, incompatible activity subsets, and non-preemption constraints. We present the results of an extensive computational study carried out on 2,040 benchmark instances of PSPLIB, with up to 120 activities, which provides strong evidence that the new lower bound performs excellently. In particular, we improve the best known lower bounds of 5 instances.

17.
Combining constraints using logical connectives such as disjunction is ubiquitous in constraint programming, because it adds considerable expressive power to a constraint language. We explore the solver architecture needed to propagate such combinations of constraints efficiently. In particular, we describe two new features, named satisfying sets and constraint trees, and we also make use of movable triggers (Gent et al., 2006) [1]; with these three complementary features we obtain considerable efficiency gains. A key reason for the success of Boolean Satisfiability (SAT) solvers is their ability to propagate Or constraints efficiently, making use of movable triggers. We successfully generalise this approach to an Or over an arbitrary set of constraints, maintaining the crucial property that at most two constraints are active at any time, with no computation at all done on the others. We also give an And propagator within our framework, which may be embedded within the Or. Using this approach, we demonstrate speedups of over 10,000 times in some cases compared to traditional constraint programming approaches. We also prove that the Or algorithm enforces generalised arc consistency (GAC) when all its child constraints have a GAC propagator and no variables are shared between children. By extending the Or propagator, we present a propagator for AtLeastK, which expresses that at least k of its child constraints are satisfied in any solution. Some logical expressions (e.g. exclusive-or) cannot be compactly expressed using And, Or, and AtLeastK; we therefore investigate reification of constraints and present a fast generic algorithm for reification using satisfying sets and movable triggers.
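The two-watched-constraint idea can be sketched with an invented miniature constraint interface: only two disjuncts are active at a time, a falsified watched child is swapped for a spare, and a lone survivor is propagated as if it had been posted alone. This is a sketch of the mechanism, not the paper's solver.

```python
# Two watched children generalize SAT's two-watched-literal scheme to an Or
# over arbitrary constraints: inactive children incur no propagation work.
class Constraint:
    def __init__(self, name):
        self.name = name
        self.falsified = False          # would be derived from variable domains
    def propagate(self):
        print(f"propagating {self.name} as the last disjunct standing")

class OrPropagator:
    def __init__(self, children):
        self.children = children
        self.watched = [c for c in children if not c.falsified][:2]

    def on_falsified(self, child):
        if child not in self.watched:
            return                                      # inactive children cost nothing
        spare = [c for c in self.children
                 if not c.falsified and c not in self.watched]
        if spare:
            self.watched[self.watched.index(child)] = spare[0]
        else:
            self.watched.remove(child)
            if len(self.watched) == 1:
                self.watched[0].propagate()             # unit disjunct: enforce it
            elif not self.watched:
                raise ValueError("Or constraint failed: all disjuncts falsified")

c1, c2, c3 = Constraint("x<3"), Constraint("x=7"), Constraint("x>9")
disj = OrPropagator([c1, c2, c3])
c1.falsified = True; disj.on_falsified(c1)   # replaced by the spare child c3
c3.falsified = True; disj.on_falsified(c3)   # c2 is now unit and gets propagated
```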

18.
We present an algorithmic approach for solving large-scale two-stage stochastic problems with mixed 0–1 first-stage variables. The constraints in the first stage of the deterministic equivalent model involve both 0–1 and continuous variables, while the constraints in the second stage involve only continuous variables. The approach uses the twin node family (TNF) concept within the so-called branch-and-fix coordination algorithmic framework in order to satisfy the nonanticipativity constraints. At the same time, we consider a scenario-cluster Benders decomposition scheme for solving the large-scale LP submodels arising at each TNF integer set. Computational results are presented to demonstrate the efficiency of the proposed approach.

19.
This paper analyzes a repairable M/M/1/N queueing system under a threshold-based recovery policy: the server can break down only when there is at least one customer in the system, and recovery can be performed only when q (1 ≤ q ≤ N) or more customers are present. For this queueing system, a recursive method is applied to obtain steady-state solutions in neat closed-form expressions. We then derive several important system characteristics, such as the number of customers in the system, the probability that the server is busy, the effective arrival rate, and the expected waiting time in the system. A cost model is constructed to determine the optimal threshold value, the optimal system capacity, and the optimal service rate at minimum cost. To solve this optimization problem, the direct search method and Newton's method are employed. Sensitivity analysis is also conducted with respect to various system parameters. Finally, we present some managerial insights through an application example.
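The steady-state behavior can also be obtained numerically by solving the balance equations of the underlying Markov chain; the breakdown/repair rates (xi, eta) and the exact transition structure below are assumptions reconstructed from the abstract, not the paper's closed-form recursion.

```python
# Numeric steady state of an M/M/1/N queue with threshold-based recovery:
# breakdown only when a customer is present, repair only once n >= q.
import numpy as np

def steady_state(N, q, lam, mu, xi, eta):
    states = [(n, up) for n in range(N + 1) for up in (0, 1)]
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for n, up in states:
        i = idx[(n, up)]
        if n < N:
            Q[i, idx[(n + 1, up)]] += lam          # arrival
        if up and n >= 1:
            Q[i, idx[(n - 1, 1)]] += mu            # service completion
            Q[i, idx[(n, 0)]] += xi                # breakdown (needs a customer)
        if not up and n >= q:
            Q[i, idx[(n, 1)]] += eta               # recovery once n >= q
        Q[i, i] = -Q[i].sum()
    # solve pi Q = 0 subject to sum(pi) = 1
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return {s: p for s, p in zip(states, pi)}

pi = steady_state(N=5, q=2, lam=1.0, mu=1.5, xi=0.1, eta=0.8)
L = sum(n * p for (n, up), p in pi.items())        # mean number in system
busy = sum(p for (n, up), p in pi.items() if up and n >= 1)
print(f"mean customers: {L:.3f}, P(server busy): {busy:.3f}")
```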

20.
Semistructured data occur in situations where information lacks a homogeneous structure and is incomplete. Yet, up to now, the incompleteness of information has not been reflected by special features of query languages. Our goal is to investigate the principles of queries that allow for incomplete answers; we do not, however, present a concrete query language. Queries over classical structured data models contain a number of variables and constraints on these variables, and an answer is a binding of the variables by elements of the database such that the constraints are satisfied. In the present paper, we loosen this concept insofar as we also allow answers that are partial; that is, not all variables in the query are bound by such an answer. Partial answers make it necessary to refine the model of query evaluation. The first modification relates to the satisfaction of constraints: in some circumstances we consider constraints involving unbound variables as satisfied. Second, in order to prevent a proliferation of answers, we accept only answers that are maximal, in the sense that there is no assignment that binds more variables and satisfies the constraints of the query. Our model of query evaluation consists of two phases, a search phase and a filter phase. Semistructured databases are essentially labeled directed graphs. In the search phase, we use a query graph containing variables to match a maximal portion of the database graph. We investigate three different semantics for query graphs, which give rise to three variants of matching; for each variant, we provide algorithms and complexity results. In the filter phase, the maximal matchings resulting from the search phase are subjected to constraints, which may be weak or strong: strong constraints require all their variables to be bound, while weak constraints do not. We describe a polynomial algorithm for evaluating a special type of queries with filter constraints, and assess the complexity of evaluating other queries for several kinds of constraints. In the final part, we investigate the containment problem for queries consisting only of search constraints under the different semantics.
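The maximal-partial-answer idea can be made concrete with a brute-force sketch: a binding maps each query variable to a database node or leaves it unbound, it satisfies the query if every fully bound query edge exists in the database, and it is kept only if no satisfying binding strictly extends it. The tiny graphs below are hypothetical, and edge labels are omitted.

```python
# Brute-force maximal partial matching of a query graph into a database graph.
from itertools import product

def maximal_partial_matches(qvars, qedges, nodes, dedges):
    """Bindings map each query variable to a database node or to None (unbound)."""
    def satisfies(b):                    # every fully bound edge must exist
        return all(b[u] is None or b[v] is None or (b[u], b[v]) in dedges
                   for u, v in qedges)
    def extends(b2, b1):                 # b2 binds strictly more, agreeing on b1
        return b2 != b1 and all(v is None or b2[k] == v for k, v in b1.items())
    good = [b for b in (dict(zip(qvars, combo))
                        for combo in product(list(nodes) + [None], repeat=len(qvars)))
            if satisfies(b)]
    return [b for b in good if not any(extends(b2, b) for b2 in good)]

# database: edge 1 -> 2 only; query chain x -> y -> z cannot be fully matched,
# so e.g. {x: 1, y: 2, z: None} comes back as a maximal *partial* answer
for m in maximal_partial_matches(("x", "y", "z"),
                                 [("x", "y"), ("y", "z")],
                                 {1, 2}, {(1, 2)}):
    print(m)
```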
