Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Some of the current best conformant probabilistic planners focus on finding a fixed-length plan with maximal probability. While these approaches can find optimal solutions, they often do not scale to large problems or long plans. As has been shown in classical planning, heuristic search outperforms bounded-length search (especially when an appropriate plan length is not given a priori). The problem with applying heuristic search in probabilistic planning is that effective heuristics are as yet lacking. In this work, we apply heuristic search to conformant probabilistic planning by adapting planning graph heuristics developed for non-deterministic planning. We evaluate a straightforward application of these planning graph techniques, which amounts to exactly computing a distribution over many relaxed planning graphs (one planning graph for each joint outcome of uncertain actions at each time step). Computing this distribution is costly, so we apply Sequential Monte Carlo (SMC) to approximate it. One important issue that we explore in this work is how to automatically determine the number of samples required for effective heuristic computation. We empirically demonstrate on several domains how our efficient, but sometimes suboptimal, approach enables our planner to solve much larger problems than an existing optimal bounded-length probabilistic planner while still finding solutions of reasonable quality.
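A minimal sketch of the SMC idea above: rather than enumerating every joint outcome of the uncertain actions, sample a fixed number of outcomes and average the relaxed-planning-graph heuristic over the samples. All names here (`sample_outcomes`, `build_relaxed_graph`, `graph_heuristic`) are hypothetical stand-ins, not the planner's actual API.

```python
import random

def smc_heuristic(belief_states, sample_outcomes, build_relaxed_graph,
                  graph_heuristic, num_samples=32):
    """Approximate the expected relaxed-planning-graph heuristic by sampling
    joint action outcomes instead of computing the full distribution."""
    estimates = []
    for _ in range(num_samples):
        state = random.choice(belief_states)       # sample a world state
        outcome = sample_outcomes(state)           # sample one joint outcome
        graph = build_relaxed_graph(state, outcome)
        estimates.append(graph_heuristic(graph))   # e.g., relaxed-plan length
    return sum(estimates) / len(estimates)         # Monte Carlo mean
```

The abstract's open question, how many samples suffice, corresponds to choosing `num_samples` adaptively rather than fixing it.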

2.
Computing utilities are emerging as an important part of the infrastructure for outsourcing computer services. Fundamental to outsourcing is the notion of quality of service, which is defined by service level agreements (SLAs) between the computing utilities and their clients. One of the major objectives of computing utilities is to maximize their net profit while maintaining customer loyalty. To achieve this objective, the computing utilities should meet or exceed their SLA constraints most of the time. Defining the SLAs conservatively might be one way of easily achieving these goals. However, by tuning the SLA parameters conservatively the computing utility might underutilize its resources, with a resultant loss of revenue. We therefore see two main issues in SLA management: designing SLAs competitively so that the expected revenue for the computing utility is maximized, and maintaining operating conditions such that SLAs are satisfied with very high probability. In this paper, we show that inducting unreliable public resources into a computing utility enables more competitive SLAs while maintaining higher levels of run-time compliance and maximizing profit. Our scheduling algorithms assume that idle cycles from public resources are plentiful; therefore, the performance gains do not incur any additional financial cost. However, there is communication overhead when public resources from a wide-area network are included. This overhead is kept to a minimum by enabling the scheduler to work without any monitoring of the public resources.

3.
4.
Monte Carlo Simulation of Ocean Environment Uncertainty   (cited by: 1; self-citations: 1; cited by others: 1)
Huang Hai, Da Lianglong, Zhang Lin 《计算机仿真》(Computer Simulation), 2007, 24(9): 308-311
Sound speed in the ocean is one of the basic physical quantities for studying acoustic propagation at sea and for underwater acoustic warfare. Because of the complexity of the marine environment, the sound speed gradient carries a degree of uncertainty, which in turn makes the computed sound field and the predicted sonar detection range uncertain. The Monte Carlo method is used to capture the uncertainty of the sound speed gradient and to simulate the gradients of a real ocean environment. The simulated gradients are fed into the beam-displacement ray-mode (BDRM) sound-field prediction model to obtain transmission loss, and statistical analysis of the results yields the probability distribution of transmission loss. The expected transmission loss is proposed as the predicted value, which improves the accuracy of sound-field prediction. Finally, the sonar figure of merit (FOM) is used to derive the probability distribution of the sonar detection range, providing a probabilistic basis for sonar range prediction.
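A sketch of the workflow the abstract describes, under assumptions not in the original: Gaussian perturbations of the sound-speed profile, and a hypothetical `propagation_model` callable standing in for the BDRM model.

```python
import numpy as np

def mc_transmission_loss(base_profile, sigma, propagation_model, runs=1000):
    """Monte Carlo propagation of sound-speed uncertainty. `propagation_model`
    stands in for BDRM: a hypothetical callable mapping a sound-speed profile
    to transmission loss (dB)."""
    rng = np.random.default_rng(0)
    losses = []
    for _ in range(runs):
        # Perturb the profile to mimic environmental uncertainty.
        profile = base_profile + rng.normal(0.0, sigma, size=base_profile.shape)
        losses.append(propagation_model(profile))
    losses = np.array(losses)
    # The abstract proposes the expected value as the predicted loss.
    return losses.mean(), losses   # prediction, full sample for the histogram
```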

5.
Acquiring 3-D models from sequences of contours   (cited by: 5; self-citations: 0; cited by others: 5)
This paper explores shape from contour for acquiring 3-D graphics models. In this method, a continuous sequence of images is taken as an object rotates. A smooth convex shape can be estimated instantaneously from its contour and the first derivative of the contour movement (the trace of the contour, or the contour distribution over time). We also analyze shapes that do not satisfy the conditions of smoothness and visibility, which are indispensable for modeling an object. A region that never appears as a contour yields a nonsmooth point in the tracked contour movement. We can thus detect such a region by filtering the contour distribution, and extract its accurate location by computing the left and right derivatives of the distribution; this had not been studied previously. These unknown regions are flagged for further investigation using other visual cues. A general approach for building a geometric object model from contours is then described. The entire process from silhouettes to a 3-D model is based on local computation, which is promising for producing shapes in real time. Our direct goal is to establish 3-D graphics models of human faces for the growing needs of visual communications, and we have obtained good results.

6.
Problems of computational actuarial mathematics, dynamic financial analysis, and optimization of insurance business, and the possibility of solving them by parallel computing on graphics accelerators, are discussed. The ruin probability and other performance criteria of an insurance company are estimated by the Monte Carlo method; in many cases, it is the only applicable method. Since the ruin probability is quite small, an astronomical number of simulations may be required to achieve acceptable estimation accuracy. Parallelizing the Monte Carlo method on graphics accelerators allows the desired result to be obtained in a reasonable time. The results of numerical experiments on the developed actuarial modeling system, which uses graphics accelerators supporting Nvidia CUDA 1.3 and higher, are presented.
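A crude, non-parallel sketch of the Monte Carlo estimator being discussed: simulate a compound-Poisson surplus process U(t) = u + c*t - S(t) and count the runs that hit ruin before a finite horizon. `claim_sampler` is a hypothetical callable; the paper's GPU system is far more elaborate.

```python
import numpy as np

def ruin_probability(u, c, lam, claim_sampler, horizon=100.0, runs=100_000):
    """Crude Monte Carlo estimate of the finite-horizon ruin probability of a
    compound-Poisson surplus process (a sketch, not the paper's system)."""
    rng = np.random.default_rng(0)
    ruined = 0
    for _ in range(runs):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)   # exponential inter-arrival times
            if t > horizon:
                break
            claims += claim_sampler(rng)      # accumulate claim sizes
            if u + c * t - claims < 0:        # surplus below zero: ruin
                ruined += 1
                break
    return ruined / runs                      # relative frequency of ruin

# Example with exponential(1) claims (assumed numbers, not from the paper):
# p = ruin_probability(u=10.0, c=1.2, lam=1.0,
#                      claim_sampler=lambda rng: rng.exponential(1.0))
```

Because ruin is rare, the standard error sqrt(p(1-p)/runs) shrinks slowly with the number of runs, which is exactly why the abstract resorts to massive parallelism.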

7.
From a Bayesian decision-theoretic framework, we show that the usual statistical approaches fail to take context into account because of the assumptions made on the joint prior probability function and because of the simplistic loss function chosen. We illustrate how the constraints sometimes employed by artificial intelligence researchers constitute a different kind of assumption on the joint prior probability function. We discuss two loss functions that do take context into account and show that, when combined with the joint prior probability constraint, they create a decision problem requiring a combinatorial state-space search. We also give a theory of how probabilistic relaxation works from a Bayesian point of view.

8.
In this paper, we address the new problem of protecting volunteer computing systems from malicious volunteers who submit erroneous results, by presenting sabotage-tolerance mechanisms that work without depending on checksums or cryptographic techniques. We first analyze the traditional technique of voting and show that it reduces error rates exponentially with redundancy, but requires all work to be done several times and does not work well when there are many saboteurs. We then present a new technique called spot-checking, which reduces the error rate linearly (i.e., inversely) with the amount of work to be done, while only costing an extra fraction of the original time. Integrating these mechanisms, we present the new idea of credibility-based fault-tolerance, wherein we estimate the conditional probability of results and workers being correct based on the outcomes of voting, spot-checking and other techniques, and then use these probability estimates to direct further redundancy. Using this technique, we attain mathematically guaranteeable levels of correctness with a much smaller slowdown than is possible with voting or spot-checking alone. Finally, we validate these ideas with Monte Carlo simulations and discuss other possible variations of these techniques.
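A worked sketch of the voting claim, under a simplifying assumption not made in the paper (votes are independent and a fraction f of them are bad): the probability that a bad majority wins falls exponentially with the redundancy m.

```python
from math import comb

def majority_voting_error(f, m):
    """P(majority of m independent votes is bad) when a fraction f of the
    voters are saboteurs; the error shrinks exponentially with m."""
    return sum(comb(m, k) * f**k * (1 - f)**(m - k)
               for k in range(m // 2 + 1, m + 1))

# Illustration (assumed numbers, not from the paper): with f = 0.1,
# 3-fold voting gives ~2.8e-2 and 5-fold voting ~8.6e-3.
if __name__ == "__main__":
    for m in (1, 3, 5, 7):
        print(m, majority_voting_error(0.1, m))
```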

9.
Desktop Grid systems have reached a preeminent place among the most powerful computing platforms on the planet. Unfortunately, they are extremely vulnerable to mischief, because computing projects exert no administrative or technical control over volunteers. Volunteers can very easily output bad results, due to software or hardware glitches (resulting from over-clocking, for instance), to get unfair computational credit, or simply to ruin the project. To mitigate this problem, Desktop Grid servers replicate work units and apply majority voting, typically over 2 or 3 results. In this paper, we observe that simple majority voting is powerless against malicious volunteers that collude to attack the project. We argue that to identify this type of attack and to spot colluding nodes, each work unit needs at least 3 voters. In addition, we propose to post-process the voting pools in two steps: (i) first, we use a statistical approach to identify nodes that were not colluding but submitted bad results; (ii) then, we use a rather simple principle to go after malicious nodes that acted together: they might have won conflicting voting pools against nodes not identified in step (i). We use simulation to show that our heuristic can be quite effective against colluding nodes in scenarios where honest nodes form a majority.

10.
Concepts of a discrete Markov flow and discrete-time Markov service are defined. A single-server finite-capacity queueing system with Markov flow and discrete-time service is studied. An algorithm for computing the stationary state probability distribution of the system is designed. The main performance characteristics, such as the stationary loss probability and the mean waiting time for service, are determined. (Translated from Avtomatika i Telemekhanika, No. 2, 2005, pp. 73-91. Original Russian text copyright © 2005 by Bocharov, Viskova. This work was supported in part by the Russian Foundation for Basic Research, project no. 02-07-90147.)
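The paper designs a specialized algorithm for its stationary distribution; for orientation, here is only the generic linear-algebra route for an arbitrary finite chain with transition matrix P.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary probabilities pi of a finite Markov chain: solve pi P = pi
    with sum(pi) = 1. A generic sketch, not the paper's algorithm."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # (P^T - I) pi = 0, 1^T pi = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

The loss probability and mean waiting time then follow from pi by summing over the appropriate system-specific states.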

11.
12.
Internet-based volunteer computing projects such as SETI@home are currently restricted to performing coarse-grained, embarrassingly parallel master-worker style tasks. This is partly due to the “pull” nature of task distribution in volunteer computing environments, where workers request tasks from the master rather than the master assigning tasks to arbitrary workers. In this paper we propose algorithms for computing batches of medium-grained tasks with deadlines in pull-style volunteer computing environments. We develop models of unreliable workers based on analysis of trace data from an actual volunteer computing project. These models are used to develop task-distribution algorithms that meet batch deadlines with high probability. We develop algorithms for perfectly reliable workers, computation-reliable workers, and unreliable workers. Finally, we demonstrate the effectiveness of the algorithms through simulations using traces from actual volunteer computing environments.

13.
Given a set of points with uncertain locations, we consider the problem of computing the probability of each point lying on the skyline, that is, the probability that it is not dominated by any other input point. If each point's uncertainty is described as a probability distribution over a discrete set of locations, we improve the best known exact solution. We also suggest why we believe our solution might be optimal. Next, we describe simple, near-linear-time approximation algorithms for computing the probability of each point lying on the skyline. In addition, some of our methods can be adapted to construct data structures that can efficiently determine the probability of a query point lying on the skyline.
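To fix the quantity being computed, here is a brute-force sketch (quadratic in the input, nothing like the paper's improved or near-linear algorithms), assuming independent points with discrete location distributions.

```python
import numpy as np

def skyline_probabilities(points, weights):
    """points[i]: array of shape (m_i, d) of possible locations of point i;
    weights[i]: matching probabilities. Returns P(point i on skyline) for
    each i, assuming independence between points."""
    def dominates(p, q):           # p dominates q: <= everywhere, < somewhere
        return np.all(p <= q) and np.any(p < q)

    n = len(points)
    result = []
    for i in range(n):
        prob_i = 0.0
        for loc, w in zip(points[i], weights[i]):
            survive = 1.0          # P(no other point dominates this location)
            for j in range(n):
                if j == i:
                    continue
                p_dom = sum(wj for locj, wj in zip(points[j], weights[j])
                            if dominates(locj, loc))
                survive *= (1.0 - p_dom)
            prob_i += w * survive
        result.append(prob_i)
    return result
```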

14.
The problem of ruin probability minimization in the Cramér-Lundberg risk model under excess-of-loss reinsurance is studied. Alongside the traditional maximization of the Lundberg characteristic coefficient R, we consider the problem of directly computing the insurer's ruin probability ψ_r(x) as a function of the initial capital x under a prescribed net-retention level r. To solve this problem, we propose the excess-of-loss variant of the Cramér integral equation, which is equivalent to the Hamilton-Jacobi-Bellman equation. A continuation method is used to solve this equation, and with it an analytical solution for the Markov risk model is found. We demonstrate on a series of standard examples that, for any admissible value of x, the ruin probability ψ_x(r) := ψ_r(x) is usually a unimodal function of r. Finally, the analytic representation of the ruin probability ψ_r(x) is compared with its asymptotic approximation as x → ∞.
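For context, the textbook definition of the Lundberg coefficient R mentioned above (a classical fact, not a result of this paper): with premium rate c, Poisson claim rate λ, and claim-size moment generating function M_X, R is the positive root of the first equation, and the Lundberg inequality bounds the ruin probability:

```latex
\lambda + c R = \lambda\, M_X(R), \qquad \psi(x) \le e^{-R x}, \quad x \ge 0.
```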

15.
In this article we describe an important structure used to model causal theories and a related problem of great interest to semi-empirical scientists. A causal Bayesian network is a pair consisting of a directed acyclic graph (called a causal graph) that represents causal relationships and a set of probability tables that, together with the graph, specify the joint probability of the variables represented as nodes in the graph. We briefly describe the probabilistic semantics of causality proposed by Pearl for this graphical probabilistic model, and how unobservable variables greatly complicate models and their application. A common question about causal Bayesian networks is the problem of identifying causal effects from nonexperimental data, called the identifiability problem. In the basic version of this problem, a semi-empirical scientist postulates a set of causal mechanisms and uses them, together with a probability distribution on the observable variables in a domain of interest, to predict the effect of a manipulation on some variable of interest. We explain this problem, provide several examples, and direct readers to recent work that solves the problem and some of its extensions. We assume that the Bayesian network structure is given and do not address the problem of learning it from data or the related statistical inference and testing issues.

16.
Benjamin Schrauwen, Marion Wardermann, David Verstraeten, Jochen J. Steil, Dirk Stroobandt 《Neurocomputing》2008, 71(7-9): 1159-1171
The benefits of using intrinsic plasticity (IP), an unsupervised, local, biologically inspired adaptation rule that tunes the probability density of a neuron's output towards an exponential distribution, thereby realizing information maximization, have already been demonstrated. In this work, we extend the ideas of this adaptation method to a more commonly used non-linearity and a Gaussian output distribution. After deriving the learning rules, we show the effects of the bounded output of the transfer function on the moments of the actual output distribution. This allows us to show that the rule converges to the expected distributions, even in random recurrent networks. The IP rule is evaluated in a reservoir computing setting, a temporal processing technique that uses random, untrained recurrent networks as excitable media, where the network's state is fed to a linear regressor that calculates the desired output. We present an experimental comparison of the different IP rules on three benchmark tasks with different characteristics. Furthermore, we show that this unsupervised reservoir adaptation is able to adapt networks with very constrained topologies, such as a 1-D lattice that generally shows quite unsuitable dynamic behavior, into reservoirs that can be used to solve complex tasks. We clearly demonstrate that IP makes reservoir computing more robust: the internal dynamics can autonomously tune themselves, irrespective of initial weights or input scaling, to the dynamic regime that is optimal for a given task.
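A sketch of what a Gaussian-target IP step for a tanh neuron y = tanh(a*x + b) can look like. The update expressions below are an assumed form for illustration and should be checked against the paper's derivation.

```python
import numpy as np

def ip_update(a, b, x, mu=0.0, sigma=0.2, eta=0.001):
    """One intrinsic-plasticity step nudging the output distribution of a
    tanh neuron toward N(mu, sigma^2). Assumed update form, for illustration
    only; verify against the paper."""
    y = np.tanh(a * x + b)
    s2 = sigma ** 2
    # Gradient-style updates on the gain a and bias b (assumed form).
    db = -eta * (-(mu / s2) + (y / s2) * (2 * s2 + 1 - y ** 2 + mu * y))
    da = eta / a + db * x
    return a + da, b + db
```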

17.
In this paper, we consider how to search for a mobile evader in a large heterogeneous region when sensors are used for detection. Sensors are modeled by their probability of detection; due to environmental effects, this probability is not constant over the region. We map this problem to a graph-search problem and, even though deterministic graph search is NP-complete, derive a tractable optimal probabilistic search strategy. We do this by defining the problem as a dynamic game played on a Markov chain, and we prove that the resulting strategy is optimal in the sense of Nash. Simulations of an example problem illustrate our approach and verify our claims.

18.
Contaminant intrusion in a water distribution network is a complex but commonly observed phenomenon that depends on three elements: a pathway, a driving force, and a contamination source. However, data on these elements are generally incomplete, non-specific, and uncertain. In an earlier work, Sadiq, Kleiner, and Rajani (2006) successfully applied traditional Dempster-Shafer theory (DST) to estimate the "risk" of contaminant intrusion in a water distribution network from limited, uncertain information. However, the method used for generating the basic probability assignment (BPA) was not very flexible and did not handle and process uncertain information effectively. In this paper, a more pragmatic method is proposed that uses the flexibility of "soft" computing to generate BPAs from uncertain information. The paper compares the two methods through numerical examples and demonstrates the efficiency and effectiveness of the modified method.
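For readers new to DST, the building block both methods feed into is Dempster's rule of combination. A generic sketch follows; the frame and masses in the example are hypothetical, and this is not the paper's modified BPA-generation method.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for basic probability assignments over
    frozenset focal elements; a generic DST building block."""
    combined, conflict = {}, 0.0
    for A, w1 in m1.items():
        for B, w2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

# Example with a hypothetical frame {intrusion, no_intrusion}:
m1 = {frozenset({"intrusion"}): 0.6,
      frozenset({"intrusion", "no_intrusion"}): 0.4}
m2 = {frozenset({"no_intrusion"}): 0.5,
      frozenset({"intrusion", "no_intrusion"}): 0.5}
print(dempster_combine(m1, m2))
```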

19.
A statistical criterion is suggested for a variational problem on constructing an optimal trajectory by which a moving object evades detection by a stationary observer. The observer's coordinates are assumed not to be known exactly but to be distributed in space according to an a priori known probability distribution. The risk functional for the variational problem is taken to be the expectation of an integral loss functional (depending on a random parameter) from the deterministic problem, and this expectation is calculated in this work.

20.
To counter the negative impact that traditional reservation mechanisms have on the quality of service of local tasks, a resource reservation mechanism based on balancing reservation revenue against loss is proposed. First, using probability theory and the statistical properties of local tasks, the probability distribution function of the sum of local task execution times over a period is derived and proved. Then, the potential loss caused by local tasks is incorporated into the resource provider's utility, and the minimum resource price that guarantees the quality of service of local tasks is obtained. Experimental results show that the proposed reservation mechanism effectively guarantees the quality of service of local tasks, improves resource utilization, and reduces the task rejection rate.
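A generic illustration of the first step: if per-task execution times were i.i.d. with a known discrete distribution, the distribution of their sum over a period would follow by repeated convolution. This is an assumed simplification for illustration, not the paper's derivation.

```python
import numpy as np

def sum_distribution(pmf, n):
    """Distribution of the sum of n i.i.d. discrete execution times via
    repeated convolution; out[k] = P(total execution time = k)."""
    out = np.array([1.0])           # sum of zero tasks is 0 with probability 1
    for _ in range(n):
        out = np.convolve(out, pmf)
    return out

# Hypothetical per-task execution-time pmf over {0, 1, 2} time units:
print(sum_distribution(np.array([0.2, 0.5, 0.3]), n=3))
```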
