Similar Documents
 20 similar documents found.
1.
仵博  吴敏 《计算机工程与设计》2007,28(9):2116-2119,2126
Partially observable Markov decision processes (POMDPs) are solved by introducing a belief state space, which converts a non-Markovian problem into a Markov-chain problem; their ability to describe the real world makes them an important branch of research on stochastic decision processes. This paper introduces the basic principles and decision process of POMDPs, then presents three representative algorithms: the Witness algorithm of Littman et al., the Incremental Pruning algorithm, and the point-based value iteration algorithm of Pineau et al., and analyses and compares the three. Applications of POMDPs are also described.
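As a concrete illustration of the belief-state idea summarized above, the following is a minimal Python sketch of the standard Bayesian belief update, b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s), which is the step that turns a POMDP into a decision process over beliefs. The matrices and the toy two-state numbers are illustrative assumptions, not values from the cited paper.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard POMDP belief update: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s).

    b : (S,) current belief over states
    T : dict a -> (S, S) transition matrix, T[a][s, s'] = P(s' | s, a)
    O : dict a -> (S, Z) observation matrix, O[a][s', o] = P(o | s', a)
    """
    predicted = b @ T[a]                    # predict the next-state distribution
    unnormalized = predicted * O[a][:, o]   # weight by the observation likelihood
    return unnormalized / unnormalized.sum()  # renormalize to a probability vector

# Toy two-state, one-action, two-observation example (illustrative values only).
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, O=O))
```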

2.
王学宁  贺汉根  徐昕 《控制与决策》2004,19(11):1263-1266
In partially observable Markov decision processes (POMDPs), perceptual aliasing can cause the memoryless policies obtained by algorithms such as Sarsa to oscillate. To address this problem, a memory-based reinforcement learning algorithm, the CPnSarsa(λ) learning algorithm, is studied. By redefining the state, the agent combines its observation history to identify aliased states. Applying the CPnSarsa(λ) algorithm to several typical POMDPs yields optimal or near-optimal policies, and compared with previous algorithms its convergence speed is greatly improved.
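As a rough, hypothetical sketch of the memory-based idea, the code below runs an ordinary tabular Sarsa(λ) update where the "state" is a short window of recent observations, one simple way for an agent to use observation history to tell aliased states apart. The function name, window length, and environment interface are assumptions for illustration; this is not the CPnSarsa(λ) algorithm itself.

```python
from collections import defaultdict

def sarsa_lambda_step(Q, E, s_hist, a, r, s_hist_next, a_next,
                      alpha=0.1, gamma=0.95, lam=0.9):
    """One tabular Sarsa(λ) update where the 'state' is a tuple of recent
    observations (a crude memory that helps disambiguate aliased states)."""
    delta = r + gamma * Q[(s_hist_next, a_next)] - Q[(s_hist, a)]
    E[(s_hist, a)] += 1.0                  # accumulating eligibility trace
    for key in list(E.keys()):             # apply the TD error and decay all traces
        Q[key] += alpha * delta * E[key]
        E[key] *= gamma * lam
    return Q, E

# Usage sketch: keep the last k observations as the augmented state (hypothetical).
Q = defaultdict(float)
E = defaultdict(float)
history = ("o0", "o1")                     # last k = 2 observations
next_history = ("o1", "o2")                # window after observing "o2"
Q, E = sarsa_lambda_step(Q, E, history, a=0, r=1.0,
                         s_hist_next=next_history, a_next=1)
```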

3.
This paper presents a real-time vision-based system to assist a person with dementia in washing their hands. The system uses only video inputs, and assistance is given as either verbal or visual prompts, or through the enlistment of a human caregiver’s help. The system combines a Bayesian sequential estimation framework for tracking hands and towel, with a decision-theoretic framework for computing policies of action. The decision making system is a partially observable Markov decision process, or POMDP. Decision policies dictating system actions are computed in the POMDP using a point-based approximate solution technique. The tracking and decision making systems are coupled using a heuristic method for temporally segmenting the input video stream based on the continuity of the belief state. A key element of the system is the ability to estimate and adapt to user psychological states, such as awareness and responsiveness. We evaluate the system in three ways. First, we evaluate the hand-tracking system by comparing its outputs to manual annotations and to a simple hand-detection method. Second, we test the POMDP solution methods in simulation, and show that our policies have higher expected return than five other heuristic methods. Third, we report results from a ten-week trial with seven persons with moderate-to-severe dementia in a long-term care facility in Toronto, Canada. The subjects washed their hands once a day, with assistance given by our automated system, or by a human caregiver, in alternating two-week periods. We give two detailed case study analyses of the system working during trials, and then show agreement between the system and independent human raters of the same trials.

4.
《Artificial Intelligence》2007,171(8-9):453-490
This study extends the framework of partially observable Markov decision processes (POMDPs) to allow their parameters, i.e., the probability values in the state transition functions and the observation functions, to be imprecisely specified. It is shown that this extension can reduce the computational costs associated with the solution of these problems. First, the new framework, POMDPs with imprecise parameters (POMDPIPs), is formulated. We consider (1) the interval case, in which each parameter is imprecisely specified by an interval that indicates possible values of the parameter, and (2) the point-set case, in which each probability distribution is imprecisely specified by a set of possible distributions. Second, a new optimality criterion for POMDPIPs is introduced. As in POMDPs, the criterion is to regard a policy, i.e., an action-selection rule, as optimal if it maximizes the expected total reward. The expected total reward, however, cannot be calculated precisely in POMDPIPs, because of the parameter imprecision. Instead, we estimate the total reward by adopting arbitrary second-order beliefs, i.e., beliefs in the imprecisely specified state transition functions and observation functions. Although there are many possible choices for these second-order beliefs, we regard a policy as optimal as long as there is at least one such choice with which the policy maximizes the total reward. Thus there can be multiple optimal policies for a POMDPIP. We regard these policies as equally optimal, and aim at obtaining one of them. By appropriately choosing which second-order beliefs to use in estimating the total reward, computational costs incurred in obtaining such an optimal policy can be reduced significantly. We provide an exact solution algorithm for POMDPIPs that does this efficiently. Third, the performance of such an optimal policy, as well as the computational complexity of the algorithm, are analyzed theoretically. Last, empirical studies show that our algorithm quickly obtains satisfactory policies for many POMDPIPs.

5.
In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable and hence the state of the conversation can never be known with certainty. Much of the research in spoken dialog systems centres on mitigating this uncertainty and recent work has focussed on three largely disparate techniques: parallel dialog state hypotheses, local use of confidence scores, and automated planning. While in isolation each of these approaches can improve action selection, taken together they currently lack a unified statistical framework that admits global optimization. In this paper we cast a spoken dialog system as a partially observable Markov decision process (POMDP). We show how this formulation unifies and extends existing techniques to form a single principled framework. A number of illustrations are used to show qualitatively the potential benefits of POMDPs compared to existing techniques, and empirical results from dialog simulations are presented which demonstrate significant quantitative gains. Finally, some of the key challenges to advancing this method, in particular scalability, are briefly outlined.

6.
For the optimization of continuous-time partially observable Markov decision processes (CTPOMDPs), this paper proposes a policy-gradient estimation method. Using uniformization, the gradient-estimation algorithm for discrete-time partially observable Markov decision processes (DTPOMDPs) is extended to the continuous-time model; the convergence and error estimation of the algorithm are studied, and a numerical example illustrates its application.

7.
Verification of reachability properties for probabilistic systems is usually based on variants of Markov processes. Current methods assume an exact model of the dynamic behavior and are not suitable for realistic systems that operate in the presence of uncertainty and variability. This research note extends existing methods for Bounded-parameter Markov Decision Processes (BMDPs) to solve the reachability problem. BMDPs are a generalization of MDPs that allows modeling uncertainty. Our results show that interval value iteration converges in the case of an undiscounted reward criterion that is required to formulate the problems of maximizing the probability of reaching a set of desirable states or minimizing the probability of reaching an unsafe set. Analysis of the computational complexity is also presented.
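A minimal sketch of one pessimistic interval Bellman backup for the max-reachability criterion is given below, assuming each transition probability is only known to lie in an interval and the adversary resolves the uncertainty against the controller. The greedy worst-case distribution routine and all names are illustrative assumptions, not the cited algorithm verbatim.

```python
def worst_case_distribution(lower, upper, values):
    """Pick a distribution within [lower, upper] (component-wise, summing to 1)
    that minimizes the expected value: push as much probability as possible onto
    low-value successors. Assumes sum(lower) <= 1 <= sum(upper) (feasibility)."""
    n = len(values)
    p = list(lower)                          # start from the lower bounds
    budget = 1.0 - sum(lower)                # remaining mass to distribute
    for i in sorted(range(n), key=lambda i: values[i]):  # lowest value first
        extra = min(upper[i] - lower[i], budget)
        p[i] += extra
        budget -= extra
    return p

def pessimistic_backup(V, goal, lower, upper, actions, states):
    """One interval value-iteration sweep for max-reachability (undiscounted):
    V(s) = 1 on the goal, else max over actions of the worst-case expected V.
    lower[(s, a)] and upper[(s, a)] are lists aligned with `states`."""
    V_new = dict(V)
    for s in states:
        if s in goal:
            V_new[s] = 1.0
            continue
        best = 0.0
        for a in actions:
            vals = [V[t] for t in states]
            p = worst_case_distribution(lower[(s, a)], upper[(s, a)], vals)
            best = max(best, sum(pi * vi for pi, vi in zip(p, vals)))
        V_new[s] = best
    return V_new
```

Iterating `pessimistic_backup` to a fixed point would give a guaranteed lower bound on the probability of reaching the goal set under the interval uncertainty.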

8.
徐明  刘广钟 《计算机应用》2015,35(11):3047-3050
To address the spatio-temporal uncertainty caused by the low bandwidth and high latency of underwater acoustic sensor networks, and the fact that the network state cannot be fully observed, a medium access control (MAC) protocol for underwater acoustic sensor networks based on partially observable Markov decision processes (POMDPs) is proposed. The protocol first divides the link quality and residual energy of each sensor node into several discrete levels to represent the node's state information. The receiving node then predicts the channel occupancy probability from the history of channel-state observations and access actions, and derives an optimal channel-scheduling policy for the sending nodes; following the schedule in that policy, the sending nodes communicate with the receiving node in turn and transmit data packets in their allocated time slots. After communication completes, the relevant nodes estimate the state of the next slot from statistics of the network transition probabilities. Simulation results show that, compared with traditional MAC protocols for underwater acoustic sensor networks, the POMDP-based protocol improves the packet delivery ratio and network throughput while reducing network energy consumption.

9.
It is assumed that a system is described by the non-linear stochastic equation , where W is a vector of independent standard Wiener processes, q is a zero mean Poisson random measure on [0, ∞) × Rn , and υ is the control vector. Given an open and bounded domain D in Rn let and where Xt = Xt(υ) is the system's state vector, k(·) is a given function and is the observation vector. The optimal control problem dealt with here is to find a feedback control law υ* ∈ U such that for every υ in a class U of admissible control laws of the form υ = υ(t, z). By applying the calculus of variations, necessary conditions on optimal controls are derived, these conditions being given by a pair of coupled non-linear partial integro-differential equations. A stochastic second-order system, describing a perturbed Duffing oscillator, is used as a test case, and two numerical methods for computing the optimal controls of the system are suggested, the efficiency and applicability of the second method being demonstrated with the aid of examples.

10.
张汝波  孟雷  史长亭 《计算机应用》2015,35(8):2375-2379
To address the high repair cost of software fault recovery in autonomous underwater vehicles (AUVs) and the fact that the system environment is only partially observable, an AUV software fault-recovery method based on microreboot and a partially observable Markov decision process (POMDP) model is proposed. Exploiting the layered structure of AUV software, a three-level microreboot structure is built to support fine-grained self-repairing microreboot policies. Based on POMDP theory, a POMDP model of AUV software self-repair is given, and a point-based value iteration (PBVI) algorithm is used to solve it and generate repair policies, with the goal of minimizing the cumulative repair cost so that the system can execute repair actions at low cost in a partially observable environment. Simulation results show that this method can resolve AUV software faults caused by software aging and system calls, and that, compared with a two-level microreboot policy and a fixed three-level microreboot policy, it is clearly better in cumulative fault-repair time and running stability.
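Since the method relies on a point-based value iteration (PBVI) solver, here is a minimal, hypothetical sketch of one PBVI-style backup at a single belief point; the array shapes, names, and discount value are assumptions for illustration and do not reproduce the repair model of the cited paper.

```python
import numpy as np

def point_based_backup(b, Gamma, R, T, O, gamma=0.95):
    """One PBVI-style backup at belief point b.

    b     : (S,) belief
    Gamma : non-empty list of (S,) alpha-vectors from the previous iteration
    R     : (A, S) immediate rewards
    T     : (A, S, S) transitions, T[a, s, s'] = P(s' | s, a)
    O     : (A, S, Z) observations, O[a, s', o] = P(o | s', a)
    Returns the new alpha-vector for this belief and the greedy action.
    """
    A, S = R.shape
    Z = O.shape[2]
    best_alpha, best_value, best_action = None, -np.inf, None
    for a in range(A):
        alpha_a = R[a].astype(float)
        for o in range(Z):
            # Back-project every old alpha-vector through (a, o), keep the best for b.
            candidates = [T[a] @ (O[a][:, o] * alpha) for alpha in Gamma]
            alpha_a = alpha_a + gamma * max(candidates, key=lambda v: float(b @ v))
        value = float(b @ alpha_a)
        if value > best_value:
            best_alpha, best_value, best_action = alpha_a, value, a
    return best_alpha, best_action
```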

11.
In this article we propose a synthesis of recent works concerning a qualitative approach, based on possibility theory, to multi-stage decision under uncertainty. Our framework is a qualitative possibilistic counterpart to Markov decision processes (MDP), for which we propose dynamic programming-like algorithms. The classical MDP algorithms and their possibilistic counterparts are then experimentally compared on a family of benchmark examples. Finally, we also explore the case of partial observability, thus providing qualitative counterparts to the partially observable Markov decision processes framework.
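To indicate what a qualitative possibilistic counterpart of a Bellman backup can look like, the sketch below uses one common optimistic criterion from the possibilistic MDP literature, in which expectation is replaced by max and product by min over a shared qualitative scale in [0, 1]; whether this matches the exact criteria of the cited article is an assumption, and all names are illustrative.

```python
def optimistic_backup(u_next, pi, states, actions):
    """One step of optimistic qualitative dynamic programming:
        u_t(s) = max_a max_{s'} min( pi(s' | s, a), u_{t+1}(s') )
    where pi gives the possibility of reaching s' from s under a,
    and values live on a qualitative scale in [0, 1]."""
    return {
        s: max(
            max(min(pi[(s, a, t)], u_next[t]) for t in states)
            for a in actions
        )
        for s in states
    }
```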

12.
Decision processes with incomplete state feedback have been traditionally modelled as partially observable Markov decision processes. In this article, we present an alternative formulation based on probabilistic regular languages. The proposed approach generalises the recently reported work on language measure theoretic optimal control for perfectly observable situations and shows that such a framework is far more computationally tractable than the classical alternative. In particular, we show that the infinite horizon decision problem under partial observation, modelled in the proposed framework, is λ-approximable and, in general, is not harder to solve compared to the fully observable case. The approach is illustrated via two simple examples.

13.
This paper proposes a simple analytical model called M time scale Markov decision processes (MMDPs) for hierarchically structured sequential decision making processes, where decisions at each level of the M-level hierarchy are made on M different discrete time scales. In this model, the state-space and the control-space of each level in the hierarchy are nonoverlapping with those of the other levels, respectively, and the hierarchy is structured in a "pyramid" sense such that a decision made at level m (slower time scale) and/or the level-m state will affect the evolutionary decision making process of the lower level m+1 (faster time scale) until a new decision is made at the higher level, but the lower level decisions themselves do not affect the transition dynamics of higher levels. The performance produced by the lower level decisions will affect the higher level decisions. A hierarchical objective function is defined such that the finite-horizon value of following a (nonstationary) policy at level m+1 over a decision epoch of level m, plus an immediate reward at level m, is the single-step reward for the decision making process at level m. From this we define a "multi-level optimal value function" and derive a "multi-level optimality equation". We discuss how to solve MMDPs exactly and study some approximation methods, along with heuristic sampling-based schemes, to solve MMDPs.

14.
Desirable properties of the infinite histories of a finite-state Markov decision process are specified in terms of a finite number of events represented as ω-regular sets. An infinite history of the process produces a reward which depends on the properties it satisfies. The authors investigate the existence of optimal policies and provide algorithms for the construction of such policies.

15.
Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where non-deterministic choice is combined with probabilistic transitions, and we study related decidability problems with partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.

16.
17.
Continuous time Markov decision processes (CTMDPs) with a finite state and action space have been considered for a long time. It is known that under fairly general conditions the reward gained over a finite horizon can be maximized by a so-called piecewise constant policy which changes only finitely often in a finite interval. Although this result has been available for more than 30 years, numerical analysis approaches to compute the optimal policy and reward are restricted to discretization methods which are known to converge to the true solution if the discretization step goes to zero. In this paper, we present a new method that is based on uniformization of the CTMDP and allows one to compute an ε-optimal policy up to a predefined precision in a numerically stable way using adaptive time steps.
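As a minimal sketch of the uniformization step mentioned in the abstract, the code below converts a CTMDP generator into the transition matrices of the embedded uniformized discrete-time process; the function, matrices, and toy rates are illustrative assumptions, and the adaptive time-step policy computation of the cited paper is not reproduced here.

```python
import numpy as np

def uniformize(Q, rate=None):
    """Turn CTMDP generator matrices Q[a] (rows sum to 0, off-diagonals >= 0)
    into the transition matrices of the uniformized discrete-time process:
        P[a] = I + Q[a] / Lambda,  with Lambda >= max_{s,a} |Q[a][s, s]|.
    The continuous-time process then behaves like this chain observed at the
    jump times of a Poisson process with rate Lambda."""
    Q = np.asarray(Q, dtype=float)                       # shape (A, S, S)
    Lam = rate if rate is not None else np.max(-Q.diagonal(axis1=1, axis2=2))
    S = Q.shape[1]
    return np.eye(S)[None, :, :] + Q / Lam, Lam

# Toy example: one action, two states with exit rates 3 and 1 (illustrative).
Q = np.array([[[-3.0, 3.0],
               [ 1.0, -1.0]]])
P, Lam = uniformize(Q)
print(Lam, P[0])   # Lambda = 3, P = [[0, 1], [1/3, 2/3]]
```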

18.
19.
Weighted Markov decision processes (MDPs) have long been used to model quantitative aspects of systems in the presence of uncertainty. However, much of the literature on such MDPs takes a monolithic approach, by modelling a system as a particular MDP; properties of the system are then inferred by analysis of that particular MDP. In contrast, in this paper we develop compositional methods for reasoning about weighted MDPs, as a possible basis for compositional reasoning about their quantitative behaviour. In particular we approach these systems from a process algebraic point of view. For these we define a coinductive simulation-based behavioural preorder which is compositional in the sense that it is preserved by structural operators for constructing weighted MDPs from components.

20.
Hierarchical algorithms for Markov decision processes have proved useful for problem domains with multiple subtasks. Although the existing hierarchical approaches are strong in task decomposition, they are weak in task abstraction, which is more important for task analysis and modeling. In this paper, we propose a task-oriented design to strengthen the task abstraction. Our approach learns an episodic task model from the problem domain, with which the planner obtains the same control effect with a more concise structure and much better performance than the original model. According to our analysis and experimental evaluation, our approach outperforms existing hierarchical algorithms such as MAXQ and HEXQ.
