Similar Documents
20 similar documents found (search time: 31 ms)
1.
Probabilistic annotations generalise standard Hoare Logic [20] to quantitative properties of probabilistic programs. They can be used to express critical expected values over program variables that must be maintained during program execution. As in standard program development, probabilistic assertions can be checked mechanically relative to an appropriate program semantics. When a mechanical prover is unable to complete such a validity check, a counterexample showing that the annotation is incorrect can provide useful diagnostic information. In this paper, we define counterexamples as failure traces for probabilistic assertions within the context of the pB language [19], an extension of the standard B method [1] that copes with probabilistic programs. In addition, we propose algorithmic techniques to find counterexamples where they exist, and suggest a ranking mechanism that returns ‘the most useful diagnostic information’ to the pB developer to aid the resolution of the problem.
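A minimal sketch of the diagnostic idea, not the pB machinery itself: a hypothetical expected-value annotation is checked by Monte Carlo simulation of a toy probabilistic program, and the traces that violate the bound pointwise are kept as candidate counterexample evidence, ranked by how far they fall short. The program, the bound, and the ranking criterion are all illustrative assumptions.

```python
import random

BOUND = 2.0   # hypothetical annotation: expected final value of x >= 2.0

def run_once(steps=4, p=0.5):
    """One execution: x is incremented with probability p at each step."""
    x, trace = 0, []
    for _ in range(steps):
        inc = 1 if random.random() < p else 0
        x += inc
        trace.append(inc)
    return x, trace

def check_annotation(n=10_000):
    total, failures = 0.0, []
    for _ in range(n):
        x, trace = run_once()
        total += x
        if x < BOUND:                      # trace violating the bound pointwise
            failures.append((BOUND - x, trace))
    estimate = total / n
    failures.sort(key=lambda t: -t[0])     # rank by distance below the bound
    return estimate, failures[:3]

est, worst = check_annotation()
print(f"estimated E[x] = {est:.3f} (annotation requires >= {BOUND})")
print("most informative failure traces:", worst)
```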

2.
Service systems are in significant danger of terrorist attacks aimed at disrupting their critical components. Such attacks seek to destroy vital assets such as transportation networks, services, and supplies. In this paper, we propose a multi-period planning model based on capacity recovery for allocating fortification/interdiction resources in a service system. The problem involves a dynamic Stackelberg game between a defender (leader) and an attacker (follower). The defender decides, in each period, which services to provide to customers and which fortification resources to allocate to facilities so that the total demand-weighted distance is minimized. The attacker then allocates interdiction resources to facilities, reducing their service capacity in each period. In this model, excess fortification/interdiction budget and capacity in one period can be carried over to the next. Moreover, facilities have a predefined capacity to serve customers whose demands vary over the time horizon. To solve this problem, two approaches are implemented and compared. The first is an exact reformulation algorithm based on decomposing the problem into a restricted master problem (RMP) and a slave problem (SP). The second is a high-performance metaheuristic, a genetic algorithm (GA), developed to overcome the decomposition method's impracticability on large-scale problem instances. We also compare the results with recent metaheuristics such as teaching-learning-based optimization (TLBO) and the dragonfly algorithm (DA). Computational results show the superiority of the GA over TLBO and DA.
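A minimal GA sketch for a fortification problem of this flavor, far simpler than the paper's dynamic multi-period model: one period, a myopic attacker who always hits the most valuable unprotected facility, and fitness equal to surviving capacity. Facility capacities, budget, and GA parameters are illustrative assumptions.

```python
import random

CAPACITY = [40, 25, 60, 35, 50]   # hypothetical facility capacities
BUDGET = 2                        # number of facilities we can fortify

def fitness(chrom):
    """Surviving capacity after the attacker hits the best unprotected target."""
    unprotected = [c for c, g in zip(CAPACITY, chrom) if g == 0]
    loss = max(unprotected) if unprotected else 0
    return sum(CAPACITY) - loss

def random_chrom():
    genes = [1] * BUDGET + [0] * (len(CAPACITY) - BUDGET)
    random.shuffle(genes)
    return genes

def crossover(a, b):
    """Uniform crossover followed by repair to respect the budget."""
    child = [random.choice(g) for g in zip(a, b)]
    while sum(child) > BUDGET:    # repair: drop surplus fortifications
        child[random.choice([i for i, g in enumerate(child) if g])] = 0
    while sum(child) < BUDGET:    # repair: add missing fortifications
        child[random.choice([i for i, g in enumerate(child) if not g])] = 1
    return child

pop = [random_chrom() for _ in range(20)]
for _ in range(50):               # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [crossover(*random.sample(elite, 2)) for _ in range(10)]

best = max(pop, key=fitness)
print("fortify facilities:", [i for i, g in enumerate(best) if g], "->", fitness(best))
```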

3.
In this paper, we propose a privacy-preserving method to determine the number of distinct users who connected to one or more entry points of a distributed Internet service with multiple service operators. The problem is motivated by the anonymization network Tor and the difficulties that arise when trying to estimate the number of Tor users. We present a way to perform distributed user counting with accurate estimates and a high level of privacy protection, based on a probabilistic data structure. We start from a relatively naive approach and analyze the level of privacy protection it provides. We then improve on this baseline mechanism, building on the insights gained. To assess the privacy properties of the discussed techniques, we use a novel probabilistic analysis approach that compares an attacker’s a priori and a posteriori knowledge.
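A minimal sketch in the spirit of this approach (the paper's exact mechanism and its privacy analysis are more refined): each entry point maintains a fixed-size bitmap in the style of linear counting, bitmaps are merged by bitwise OR so no operator ever shares raw identifiers, and the merged bitmap yields a distinct-user estimate. The bitmap size and hash choice are illustrative assumptions.

```python
import hashlib
import math

M = 4096  # bitmap size; governs both accuracy and collision-based privacy

def bucket(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % M

def observe(bitmap: list, user_id: str):
    bitmap[bucket(user_id)] = 1

def merge(a: list, b: list) -> list:
    return [x | y for x, y in zip(a, b)]

def estimate(bitmap: list) -> float:
    """Linear-counting estimator: n ~= -M * ln(fraction of empty buckets)."""
    empty = bitmap.count(0)
    return -M * math.log(empty / M) if empty else float("inf")

# Two entry points observe overlapping user populations.
site_a, site_b = [0] * M, [0] * M
for u in range(1500):
    observe(site_a, f"user{u}")
for u in range(1000, 2500):
    observe(site_b, f"user{u}")

print("distinct users ~", round(estimate(merge(site_a, site_b))))  # true value: 2500
```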

4.
This paper focuses on protection issues for supply systems subject to random attacks, i.e., attacks whose targets cannot be predicted. We present the random-attack median fortification problem (RAMF) to identify the fortification strategy that minimizes the expected operating cost after random attacks. RAMF is formulated as an integer-linear program and solved directly with a general-purpose MIP solver. Moreover, a more complex problem, the fortification median problem for disruptions caused by mixed types of attacks (FMMA), is introduced to strike a balance between defending against worst-case attacks and random attacks. Solving FMMA yields good protection results that are more practical for systems facing mixed types of attacks, provided the proportion between the attack types is properly estimated. We formulate FMMA as a non-linear bilevel program and extend a typical implicit enumeration (IE) algorithm to solve it. Finally, computational experiments demonstrate the effectiveness of both RAMF and FMMA in protection planning against random attacks, and verify the efficiency of solving the RAMF and FMMA formulations.
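A minimal brute-force sketch of the RAMF objective (the paper solves an integer-linear program instead): choose which facilities to fortify so that the expected p-median cost after one uniformly random attack on an unprotected facility is minimized. Distances, demands, and budget are illustrative assumptions.

```python
from itertools import combinations

DIST = [[0, 4, 7], [4, 0, 3], [7, 3, 0], [2, 6, 5]]  # customer x facility
DEMAND = [10, 20, 15, 5]
FACILITIES, Q = range(3), 1                           # fortify Q of 3 facilities

def median_cost(open_facs):
    """Demand-weighted cost when each customer uses its closest open facility."""
    return sum(d * min(DIST[i][j] for j in open_facs)
               for i, d in enumerate(DEMAND))

def expected_cost(fortified):
    """Average cost over equally likely single attacks on unprotected sites."""
    targets = [j for j in FACILITIES if j not in fortified]
    if not targets:
        return median_cost(set(FACILITIES))
    costs = [median_cost(set(FACILITIES) - {t}) for t in targets]
    return sum(costs) / len(costs)

best = min(combinations(FACILITIES, Q), key=expected_cost)
print("fortify:", best, "expected cost:", expected_cost(best))
```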

5.
Many real-life critical systems are described by large models and exhibit both probabilistic and non-deterministic behaviour. Verifying such systems requires techniques that avoid the state-space explosion problem. Symbolic model checking and compositional verification, such as assume-guarantee reasoning, are two promising techniques for overcoming this barrier. In this paper, we propose a probabilistic symbolic compositional verification approach (PSCV) to verify probabilistic systems in which each component is a Markov decision process (MDP). PSCV starts by implicitly encoding the system components using compact data structures. To establish the symbolic compositional verification process, we propose a sound and complete symbolic assume-guarantee reasoning rule. To attain completeness of this rule, we propose modelling assumptions using interval MDPs. In addition, we give a symbolic MTBDD-learning algorithm to automatically generate the symbolic assumptions. Moreover, we propose using causality to generate small counterexamples in order to refine the conjectured assumptions. Experimental results suggest a promising outlook for our probabilistic symbolic compositional approach.
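A minimal value-iteration sketch for the kind of query such a framework ultimately discharges on an MDP: the maximum probability of eventually reaching a goal state. The paper works symbolically with MTBDDs and assume-guarantee reasoning; this explicit-state version, with an illustrative model, only shows the underlying semantics.

```python
# transitions[state][action] = list of (next_state, probability); illustrative MDP
transitions = {
    0: {"a": [(1, 0.7), (2, 0.3)], "b": [(2, 1.0)]},
    1: {"a": [(1, 0.2), (3, 0.8)]},
    2: {"a": [(0, 0.5), (3, 0.5)]},
    3: {},                      # goal state, absorbing
}
GOAL = 3

def max_reach_prob(eps=1e-9):
    p = {s: (1.0 if s == GOAL else 0.0) for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            if s == GOAL or not acts:
                continue
            # best action maximizes the one-step expected reach probability
            new = max(sum(pr * p[t] for t, pr in dist) for dist in acts.values())
            delta = max(delta, abs(new - p[s]))
            p[s] = new
        if delta < eps:
            return p

print(max_reach_prob())   # Pmax of eventually reaching state 3, per state
```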

6.
We present properties of multihead two-way probabilistic finite automata that parallel those of their deterministic and nondeterministic counterparts. We define multihead probabilistic finite automata with logspace constructible transition probabilities, and we describe a technique to simulate these automata by standard logspace probabilistic Turing machines. Next, we represent logspace probabilistic complexity classes as proper hierarchies based on corresponding multihead two-way probabilistic finite automata, and we show their (deterministic logspace) reducibility to the second levels of these hierarchies. We obtain a simple formula for the maximum inherent bandwidth of the configuration transition matrices associated with the k-head probabilistic finite automata processing a length-n input string. (The inherent bandwidth of the configuration transition matrices associated with an automaton processing a length-n input string is the smallest bandwidth we can get by changing the enumeration order of the automaton’s configurations.) Partially based on this relation, we find an apparently easier logspace complete problem for PL (the class of languages recognized by logspace unbounded-error probabilistic Turing machines), and we discuss possibilities for a space-efficient deterministic simulation of probabilistic automata.
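A minimal sketch of the "inherent bandwidth" notion from the abstract: the bandwidth of a 0/1 transition matrix is the largest |i - j| over nonzero entries, and the inherent bandwidth is the smallest bandwidth achievable by re-enumerating (permuting) the configurations. The brute-force search below is only feasible for tiny matrices and is purely illustrative.

```python
from itertools import permutations

def bandwidth(matrix):
    return max((abs(i - j) for i, row in enumerate(matrix)
                for j, v in enumerate(row) if v), default=0)

def inherent_bandwidth(matrix):
    n = len(matrix)
    best = n
    for perm in permutations(range(n)):          # all re-enumerations
        b = max((abs(perm[i] - perm[j]) for i in range(n)
                 for j in range(n) if matrix[i][j]), default=0)
        best = min(best, b)
    return best

M = [[0, 1, 0, 1],    # toy configuration-transition adjacency matrix
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [1, 0, 1, 0]]
print(bandwidth(M), inherent_bandwidth(M))
```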

7.
The T-fuzzy n-ary subhypergroups of an n-ary hypergroup are defined using triangular norms, and some related properties are obtained. In particular, we consider the probabilistic version of n-ary hypergroups by using random sets, and we show that the fuzzy n-ary hypergroups defined by triangular norms are consequences of some probabilistic n-ary hypergroups under certain conditions. Some recent results on n-ary hypergroups given by Davvaz and Corsini are extended.
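A minimal sketch of the triangular norms (t-norms) this construction builds on: binary operations on [0,1] that are commutative, associative, monotone, and have 1 as identity. Three standard examples are shown; the membership values in the spot-check are illustrative.

```python
def t_min(a, b):           # Godel (minimum) t-norm
    return min(a, b)

def t_prod(a, b):          # product t-norm
    return a * b

def t_luk(a, b):           # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

# In a T-fuzzy subhypergroup, the membership degree of elements of the
# hyperproduct of x and y must dominate T(mu(x), mu(y)); a numeric spot-check:
mu_x, mu_y = 0.8, 0.6
for name, t in [("min", t_min), ("product", t_prod), ("Lukasiewicz", t_luk)]:
    print(f"T({mu_x}, {mu_y}) under {name} t-norm = {t(mu_x, mu_y):.2f}")
```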

8.
Multi-Agent Systems (MASs) have long been modeled through knowledge and social commitments independently. In this paper, we present a new method that merges the two concepts to model and verify MASs in the presence of uncertainty. To express knowledge and social commitments simultaneously in uncertain settings, we define a new multi-modal logic called Probabilistic Computation Tree Logic of Knowledge and Commitments (PCTLkc for short), which combines two existing probabilistic logics: the probabilistic logic of knowledge PCTLK and the probabilistic logic of commitments PCTLC. To model stochastic MASs, we present a new version of interpreted systems that captures probabilistic behavior and accounts for the communication between interacting components. We then introduce a new probabilistic model checking procedure to check the compliance of target systems against desirable properties written in PCTLkc, and we report the resulting verification outcomes. Our model checking technique is reduction-based: it transforms the problem of model checking PCTLkc into the problem of model checking a well-established logic, namely PCTL. Doing so allows us to reuse the PRISM model checker to implement the proposed approach. Finally, we demonstrate the effectiveness of our approach on a real case study. This framework can be considered a step towards capturing the interactions between knowledge and social commitments in stochastic agent-based systems.
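The reduction targets PCTL model checking, whose core computation on a DTMC is reachability: P=?[F target], obtained by solving a linear system restricted to the transient states. A minimal numeric sketch with an illustrative chain follows; PRISM performs this (and much more) at scale.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.0, 0.2, 0.0, 0.8],
              [0.0, 0.0, 1.0, 0.0],    # state 2: absorbing, non-target
              [0.0, 0.0, 0.0, 1.0]])   # state 3: target, absorbing
target, transient = [3], [0, 1]

A = P[np.ix_(transient, transient)]            # transitions among transient states
b = P[np.ix_(transient, target)].sum(axis=1)   # one-step jumps into the target
p = np.linalg.solve(np.eye(len(transient)) - A, b)   # solve p = A p + b

for s, prob in zip(transient, p):
    print(f"P=?[F target] from state {s}: {prob:.4f}")
```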

9.
Vulnerability to sudden service disruptions due to deliberate sabotage and terrorist attacks is one of the major threats of today. In this paper, we present a bilevel formulation of the r-interdiction median problem with fortification (RIMF). RIMF identifies the most cost-effective way of allocating protective resources among the facilities of an existing but vulnerable system so that the impact of the most disruptive attack on r unprotected facilities is minimized. The model is based upon the classical p-median location model and assumes that the efficiency of the system is measured in terms of accessibility or service-provision costs. In the bilevel formulation, the top-level problem involves deciding which facilities to fortify in order to minimize the worst-case efficiency reduction due to the loss of unprotected facilities; worst-case losses are modeled in the lower-level interdiction problem. We solve the bilevel problem with an implicit enumeration (IE) algorithm, which relies on the efficient solution of the lower-level interdiction problem. Extensive computational results are reported, including comparisons with earlier results obtained by a single-level approach to the problem.
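A minimal enumeration sketch of the RIMF bilevel structure on toy data: the defender fortifies q facilities; the attacker then interdicts the r unprotected facilities that maximize the resulting p-median cost. Implicit enumeration prunes this search; the exhaustive version below, with illustrative distances and demands, only shows the nested optimization.

```python
from itertools import combinations

DIST = [[2, 9, 5, 7], [8, 3, 6, 4], [5, 7, 2, 9], [6, 4, 8, 3]]  # customer x facility
DEMAND = [12, 8, 15, 10]
FACS, Q, R = range(4), 1, 1      # fortify Q facilities; attacker destroys R

def cost(open_facs):
    return sum(d * min(DIST[i][j] for j in open_facs)
               for i, d in enumerate(DEMAND))

def worst_attack_cost(fortified):
    """Lower level: the attacker's best response to a fortification plan."""
    candidates = [j for j in FACS if j not in fortified]
    return max(cost(set(FACS) - set(hit))
               for hit in combinations(candidates, R))

best = min(combinations(FACS, Q), key=worst_attack_cost)
print("fortify:", best, "worst-case cost:", worst_attack_cost(best))
```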

10.
As large data centers hosting multiple Web applications emerge, it is critical to isolate application environments for security reasons and to provision shared resources effectively and efficiently so as to meet different service-quality targets at minimum operational cost. To address this problem, we developed a novel resource-management framework for multi-tier applications based on virtualization mechanisms. Key techniques presented in this paper include (1) an analytic performance model that employs probabilistic analysis and overload management to deal with non-equilibrium states; (2) a general formulation of the resource-management problem that can be solved by both deterministic and stochastic optimization algorithms; (3) deployment of virtual servers to partition resources at a much finer level; and (4) an investigation of the impact of the failure rate to examine the effect of application isolation. Simulation experiments comparing three resource-allocation schemes demonstrate the advantage of our dynamic approach in providing differentiated service qualities, preserving QoS levels in failure scenarios, and improving overall performance while reducing resource-usage cost.
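A minimal sketch of the kind of analytic performance model such a framework relies on: each virtual server is approximated as an M/M/1 queue, and capacity shares are chosen by brute force to minimize the weighted mean response time while refusing overloaded (non-equilibrium) configurations. The arrival rates, total capacity, and grid step are illustrative assumptions, not the paper's model.

```python
ARRIVALS = [30.0, 55.0]     # requests/s for two hosted applications
TOTAL_CAP = 120.0           # total service rate to split across them

def response_time(lam, mu):
    """M/M/1 mean response time; None signals an overloaded allocation."""
    return 1.0 / (mu - lam) if mu > lam else None

best = None
for share in range(1, 100):                       # capacity split in 1% steps
    mu1 = TOTAL_CAP * share / 100.0
    mu2 = TOTAL_CAP - mu1
    r1, r2 = response_time(ARRIVALS[0], mu1), response_time(ARRIVALS[1], mu2)
    if r1 is None or r2 is None:                  # overload: discard
        continue
    mean_r = (ARRIVALS[0] * r1 + ARRIVALS[1] * r2) / sum(ARRIVALS)
    if best is None or mean_r < best[0]:
        best = (mean_r, mu1, mu2)

print(f"mean response {best[0]*1000:.1f} ms with capacities "
      f"{best[1]:.1f} and {best[2]:.1f} req/s")
```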

11.
The flexibility of the XML data model allows a more natural representation of uncertain data than the relational model. Matching a twig pattern against XML data is a fundamental problem in querying information from XML documents. For a probabilistic XML document, each twig answer has a probabilistic value because of the uncertainty of the data. Twig answers with small probabilistic values are useless to users, who usually only want the answers with the k largest probabilistic values. Existing algorithms for ordinary XML documents are not directly applicable here, owing to the need to handle probability distributional nodes and to efficiently calculate the top-k probabilities of answers in probabilistic XML. In this paper, we address the problem of directly finding twig answers with the top-k probabilistic values in probabilistic XML documents. We propose a new encoding scheme for probabilistic XML called PEDewey. Based on this encoding scheme, we design two algorithms for finding answers of top-k probabilities for twig queries: ProTJFast, which processes probabilistic XML data based on element streams in document order, and PTopKTwig, which is based on element streams ordered by path probability values. Experiments have been conducted to study the performance of these algorithms.
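A minimal sketch of two ingredients behind such algorithms: a Dewey-like label extended with the probabilities of the distributional nodes along a path (here simply a list of (component, probability) pairs, loosely inspired by, not identical to, PEDewey), and top-k selection of answers by path probability using a bounded heap. Labels and probabilities are illustrative.

```python
import heapq

def path_probability(label):
    """Product of the distributional-node probabilities along the path."""
    p = 1.0
    for _component, prob in label:
        p *= prob
    return p

answers = {                      # candidate twig answers -> extended labels
    "a1": [(1, 1.0), (2, 0.9), (1, 0.8)],
    "a2": [(1, 1.0), (3, 0.5), (2, 0.7)],
    "a3": [(2, 1.0), (1, 0.95)],
    "a4": [(1, 1.0), (2, 0.9), (3, 0.2)],
}

K = 2
top_k = heapq.nlargest(K, answers, key=lambda a: path_probability(answers[a]))
for a in top_k:
    print(a, round(path_probability(answers[a]), 3))
```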

12.
A model for organizing protection mechanisms in a distributed information system is proposed; implementing the model improves both the time-related and the probabilistic characteristics of protection systems. In addition, a new algorithm for detecting attacks against the resources of an information system is suggested, based on the notion of a critical threat domain.

13.
The capacitated continuous location-allocation problem, also called the capacitated multisource Weber problem (CMWP), is concerned with locating m facilities in the Euclidean plane and allocating their capacity to n customers at minimum total cost. The deterministic version of the problem, which assumes that customer locations and demands are known with certainty, is a nonconvex optimization problem. In this work, we focus on a probabilistic extension referred to as the probabilistic CMWP (PCMWP) and consider the situation in which customer locations are randomly distributed according to a bivariate probability distribution. We first formulate a discrete approximation of the problem as a mixed-integer linear programming model in which facilities can be located on a set of candidate points. We then present three heuristics to solve the problem. Since optimal solutions cannot be found, we assess the performance of the heuristics using the results obtained by an alternate location-allocation heuristic that was originally developed for the deterministic version of the problem and that we tailored to the PCMWP. The new heuristics depend on evaluating the expected distances between facilities and customers, which is possible only for a few combinations of distance functions and probability distributions. We therefore propose approximation methods that make the heuristics applicable for any distance function and any probability distribution of customer coordinates.
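A minimal sketch of the quantity these heuristics repeatedly need: the expected Euclidean distance between a fixed facility and a customer whose location is random. For most distance/distribution pairs this has no closed form; plain Monte Carlo, shown below with illustrative parameters, is the simplest approximation (the paper's approximation methods are more refined).

```python
import math
import random

def expected_distance(facility, sample_customer, n=100_000):
    """Monte Carlo estimate of E[ ||facility - customer|| ]."""
    fx, fy = facility
    total = 0.0
    for _ in range(n):
        cx, cy = sample_customer()
        total += math.hypot(fx - cx, fy - cy)
    return total / n

# Customer location ~ bivariate normal around (10, 10), axis-aligned.
def customer():
    return random.gauss(10.0, 2.0), random.gauss(10.0, 2.0)

print(f"E[dist] ~ {expected_distance((4.0, 7.0), customer):.3f}")
```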

14.
15.
A loosely coupled multiprocessor system contains multiple processors, each with its own local memory. Balancing the load among the processors is of fundamental importance in enhancing the performance of such a system. This study considers probabilistic load balancing in a heterogeneous multiprocessor system with many job classes. The load-balancing scheme is formulated as a nonlinear programming problem with linear constraints, and an optimal probabilistic load-balancing algorithm is proposed to solve it. The proposed method is proven to be globally optimal in the sense that it yields the minimum overall average job response time on a probabilistic basis.
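A minimal sketch of probabilistic load balancing as a nonlinear program with linear constraints: route a job stream to heterogeneous processors with probabilities p_i so that the average M/M/1 response time is minimized, subject to the probabilities summing to 1 and stability. This covers a single job class with illustrative rates and a generic solver, not the paper's specialized optimal algorithm.

```python
import numpy as np
from scipy.optimize import minimize

LAMBDA = 8.0                       # total job arrival rate (jobs/s)
MU = np.array([4.0, 6.0, 10.0])    # heterogeneous service rates

def avg_response(p):
    lam = LAMBDA * p               # per-processor arrival rates
    with np.errstate(divide="ignore"):
        r = np.where(lam < MU, 1.0 / (MU - lam), 1e9)   # penalize overload
    return float(np.dot(p, r))

result = minimize(
    avg_response,
    x0=np.full(3, 1 / 3),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
print("routing probabilities:", np.round(result.x, 3))
print(f"average response time: {avg_response(result.x):.4f} s")
```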

16.
There is growing concern about anonymity and privacy on the Internet, which has resulted in a large body of work on the formalization and verification of anonymity. In particular, the importance of the probabilistic aspects of anonymity has recently been highlighted by many authors. Several different notions of “probabilistic anonymity” have been studied so far, but proof methods for such probabilistic notions have not yet been elaborated. In this paper we introduce a simulation-based proof method for one notion of probabilistic anonymity introduced by Bhargava and Palamidessi, called strong probabilistic anonymity. The method is a probabilistic adaptation of the one by Kawabe, Sakurada et al. for non-deterministic anonymity: the anonymity of a protocol is proved by finding a forward/backward simulation between certain automata. For the jump from non-determinism to probability, we exploit a generic, coalgebraic theory of traces and simulations developed by Hasuo, Jacobs, and Sokolova. In particular, an appropriate notion of probabilistic simulation is obtained as an instantiation of the generic definition, for which the soundness theorem comes for free. Additionally, we show how a similar idea can be used to verify a weaker notion of probabilistic anonymity called probable innocence.
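A minimal numeric check of strong probabilistic anonymity in the sense of Bhargava and Palamidessi (the paper proves it via simulations between automata instead): a protocol is strongly anonymous when the probability of every observable is the same regardless of which user is the culprit, i.e. all rows of the conditional table P(observation | user) coincide. The tables below are illustrative.

```python
def strongly_anonymous(table, tol=1e-12):
    """True iff all conditional observation distributions are identical."""
    rows = list(table.values())
    return all(
        obs_dist.keys() == rows[0].keys()
        and all(abs(obs_dist[o] - rows[0][o]) <= tol for o in obs_dist)
        for obs_dist in rows[1:]
    )

anonymous_protocol = {          # P(observation | user): identical rows
    "alice": {"o1": 0.5, "o2": 0.5},
    "bob":   {"o1": 0.5, "o2": 0.5},
}
leaky_protocol = {              # observations correlate with the user
    "alice": {"o1": 0.9, "o2": 0.1},
    "bob":   {"o1": 0.4, "o2": 0.6},
}

print(strongly_anonymous(anonymous_protocol))  # True
print(strongly_anonymous(leaky_protocol))      # False
```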

17.
With the increasing importance of XML, LDAP directories, and text-based information sources on the Internet, there is an ever-greater need to evaluate queries involving (sub)string matching. In many cases, matches need to be on multiple attributes/dimensions, with correlations between the dimensions. Effective query optimization in this context requires good selectivity estimates. In this paper, we use pruned count-suffix trees (PSTs) as the basic data structure for substring selectivity estimation. For the 1-D problem, we present a novel technique called MO (Maximal Overlap). We then develop and analyze two 1-D estimation algorithms, MOC and MOLC, based on MO and a constraint-based characterization of all possible completions of a given PST. For the k-D problem, we first generalize PSTs to multiple dimensions and develop a space- and time-efficient probabilistic algorithm to construct k-D PSTs directly. We then show how to extend MO to multiple dimensions. Finally, we demonstrate, both analytically and experimentally, that MO is practical and substantially superior to competing algorithms.
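A minimal sketch of overlap-based substring selectivity estimation, inspired by (not identical to) MO: we keep counts only for substrings up to a maximum length, a crude stand-in for a pruned count-suffix tree, and estimate a longer query's count by chaining overlapping windows while dividing out the counts of the shared overlaps. The data set, window length, and chaining rule are illustrative assumptions.

```python
from collections import Counter

DATA = ["database", "data", "dataset", "update", "validate", "datatype"]
MAXLEN = 3

counts = Counter()
for s in DATA:
    seen = set()
    for i in range(len(s)):
        for l in range(1, MAXLEN + 1):
            sub = s[i:i + l]
            if len(sub) == l and sub not in seen:
                seen.add(sub)          # count each string at most once
                counts[sub] += 1

def estimate(query):
    """Chain length-MAXLEN windows; divide by the (MAXLEN-1)-overlap counts."""
    if len(query) <= MAXLEN:
        return counts.get(query, 0)
    est = counts.get(query[:MAXLEN], 0)
    for i in range(1, len(query) - MAXLEN + 1):
        window = query[i:i + MAXLEN]
        overlap = query[i:i + MAXLEN - 1]
        if counts.get(overlap, 0) == 0:
            return 0.0
        est *= counts.get(window, 0) / counts[overlap]
    return est

print("estimated count of 'data':", estimate("data"))
print("estimated count of 'datab':", estimate("datab"))
```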

18.
Disruptions caused by deliberate sabotage and terrorist attacks have long been considered a major threat by governments. Hence, identifying critical facilities and planning their fortification have become a priority for greater security and safety. This paper presents a bi-level formulation of the r-interdiction median problem with fortification for critical hierarchical facilities. In the developed bi-level formulation, the defender, as the leader, decides to protect a certain number of facilities in each level of the hierarchical system in order to minimize the impact of the most disruptive attacks on unprotected facilities. The attacker, as the follower, with full information about the protected facilities, then makes an interdiction plan that maximizes the total post-attack cost incurred by the defender. We develop three metaheuristic algorithms and an exhaustive enumeration method to solve the introduced problem. Extensive computational tests on a set of randomly generated instances demonstrate the effectiveness of the developed algorithms.

19.
In advancing discrete-based computational cancer models towards clinical application, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that must eventually be incorporated in one form or another. Model scalability becomes of paramount interest. In an effort to address this critical issue, we present a novel multi-scale and multi-resolution agent-based in silico glioma model. While ‘multi-scale’ refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, ‘multi-resolution’ is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution at which they are simulated. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm: to specific tumor areas at particular times. Using a previously described 2D brain tumor model, we have developed four computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as the control. To quantify the algorithms’ performance, we rank them by weighing the computational time savings of the simulation runs against each method’s ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods. The main finding of this work is that a multi-resolution approach can reduce the computation time of a discrete-based model substantially while maintaining a comparably high predictive power. This hints at even greater computational savings in the more realistic 3D setting, and thus appears to outline a possible path towards scalability for the all-important clinical translation.
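A minimal sketch of the multi-resolution idea, detached from the glioma biology: agents are grouped into spatial clusters, clusters are classified as active or inactive by how much their state changed recently, and only active clusters are updated every step while inactive ones are refreshed at a coarser interval. The grid size, activity metric, threshold, and placeholder update rule are all illustrative assumptions.

```python
import random

GRID, CLUSTER, THRESHOLD, COARSE_EVERY = 16, 4, 0.5, 5
cells = [[random.random() for _ in range(GRID)] for _ in range(GRID)]
prev = [row[:] for row in cells]

def cluster_activity(cx, cy):
    """Mean absolute change of the cluster's cells since its last update."""
    total = sum(abs(cells[y][x] - prev[y][x])
                for y in range(cy, cy + CLUSTER)
                for x in range(cx, cx + CLUSTER))
    return total / (CLUSTER * CLUSTER)

def update_cluster(cx, cy):
    for y in range(cy, cy + CLUSTER):
        for x in range(cx, cx + CLUSTER):
            prev[y][x] = cells[y][x]
            cells[y][x] += random.uniform(-1, 1)   # placeholder agent rule

for step in range(20):
    for cy in range(0, GRID, CLUSTER):
        for cx in range(0, GRID, CLUSTER):
            active = cluster_activity(cx, cy) > THRESHOLD
            if active or step % COARSE_EVERY == 0:   # fine vs coarse cadence
                update_cluster(cx, cy)
```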

20.
Coherence graphs     
We study the consistency of a number of probability distributions, which are allowed to be imprecise. To make the treatment as general as possible, we represent those probabilistic assessments as a collection of conditional lower previsions. The problem then becomes proving Walley's (strong) coherence of the assessments. In order to maintain generality in the analysis, we assume that nearly no information is given about the numbers that make up the lower previsions in the collection. Under this condition, we investigate the extent to which the above global task can be decomposed into simpler, more local ones. This is done by introducing a graphical representation of the conditional lower previsions that we call the coherence graph: we show that the coherence graph allows one to isolate subsets of the collection whose coherence is sufficient for the coherence of all the assessments, and we provide a polynomial-time algorithm that finds these subsets efficiently. We illustrate the implications of our results by focusing on three models and problems: Bayesian and credal networks, whose coherence we prove; the compatibility problem, for which we provide an optimal graphical decomposition; and probabilistic satisfiability, for which we show that some intractable instances can instead be solved efficiently by exploiting coherence graphs.
