Similar Literature
20 similar documents retrieved (search time: 413 ms)
1.
We consider average reward Markov decision processes with discrete time parameter and denumerable state space. We are concerned with the following problem: Find necessary and sufficient conditions so that, for an arbitrary bounded reward function, the corresponding average reward optimality equation has a bounded solution. This problem is solved for a class of systems including the case in which, under the action of any stationary policy, the state space is an irreducible positive recurrent class.
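For reference, the average reward optimality equation discussed in this abstract has the following standard form (a generic statement in common notation, with optimal gain $g$, relative value function $h$, reward $r$, and transition law $p$; the symbols are ours, not necessarily the authors'):

$$g + h(x) = \sup_{a \in A(x)} \Big[ r(x,a) + \sum_{y \in S} p(y \mid x,a)\, h(y) \Big], \qquad x \in S.$$

The question posed above is when this equation admits a bounded solution $(g, h)$ for every bounded $r$.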

2.
Decentralized partially observable Markov decision processes (DEC-POMDPs) are an approach to modelling multi-robot decision-making problems under uncertainty. Since the problem is NEXP-complete, no efficient exact algorithm exists, and despite the attention the model has received recently, only a few approximate solutions capable of handling small problems have been proposed. In this study, we offer a novel approximate solution algorithm for DEC-POMDP problems using evolution strategies, together with a novel approach to approximately calculating the fitness of the chromosomes, which corresponds to the expected reward. We also propose and solve a new problem, a more complex, modified version of the grid meeting problem. Our results show that our algorithm is scalable and can solve problems with more states than those attempted in previous studies.
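As a rough illustration of the evolution strategy idea (a sketch only, not the authors' implementation; `simulate_episode` is a hypothetical simulator that returns the total reward of the joint policy encoded by a parameter vector):

```python
import numpy as np

def estimate_fitness(theta, simulate_episode, n_runs=30):
    """Approximate the expected reward of the policy encoded by `theta`
    by averaging returns over independent simulated episodes."""
    return np.mean([simulate_episode(theta) for _ in range(n_runs)])

def mu_lambda_es(simulate_episode, dim, mu=5, lam=20, sigma=0.3, generations=100):
    """Plain (mu, lambda) evolution strategy over policy parameters."""
    rng = np.random.default_rng(0)
    parents = rng.normal(size=(mu, dim))
    for _ in range(generations):
        # Each offspring mutates a randomly chosen parent.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, dim))
        fitness = np.array([estimate_fitness(x, simulate_episode)
                            for x in offspring])
        # Keep the mu best offspring as the next parent population.
        parents = offspring[np.argsort(fitness)[-mu:]]
    return parents[-1]  # best parameter vector found
```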

3.
In this paper we consider a completely ergodic Markov decision process with finite state and decision spaces under the average return per unit time criterion. An algorithm is derived that approximates the optimal solution. It is shown that this algorithm is finite and supplies upper and lower bounds for the maximal average return, together with a nearly optimal policy whose average return lies between these bounds.
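One standard scheme of this kind is value iteration with span-based bounds, where at every sweep the minimum and maximum one-step improvements bracket the maximal average return; below is a minimal sketch (not necessarily the paper's exact algorithm), with `P[a]` the transition matrix and `r[a]` the reward vector under action `a`:

```python
import numpy as np

def bounded_value_iteration(P, r, eps=1e-6, max_iter=10_000):
    """Value iteration for an ergodic average reward MDP.
    Returns lower/upper bounds on the optimal gain and a greedy
    policy whose average return lies between these bounds."""
    n_actions, n_states = len(P), P[0].shape[0]
    v = np.zeros(n_states)
    for _ in range(max_iter):
        q = np.array([r[a] + P[a] @ v for a in range(n_actions)])
        v_new = q.max(axis=0)
        diff = v_new - v
        lower, upper = diff.min(), diff.max()  # bounds on the optimal gain
        if upper - lower < eps:                # span stopping criterion
            return lower, upper, q.argmax(axis=0)
        v = v_new - v_new.min()                # renormalize to avoid drift
    raise RuntimeError("no convergence within max_iter")
```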

4.
The adaptive critic heuristic has been a popular algorithm in both reinforcement learning (RL) and approximate dynamic programming (ADP); it is one of the earliest RL and ADP algorithms. RL and ADP algorithms are particularly useful for solving Markov decision processes (MDPs) that suffer from the curses of dimensionality and modeling. Many real-world problems, however, tend to be semi-Markov decision processes (SMDPs), in which the time spent in each transition of the underlying Markov chain is itself a random variable. Unfortunately, for the average reward case, unlike the discounted reward case, the MDP formulation does not extend easily to the SMDP. Examples of SMDPs can be found in supply chain management, maintenance management, and airline revenue management. In this paper, we propose an adaptive critic heuristic for the SMDP under the long-run average reward criterion. We present a convergence analysis showing that, under certain mild conditions that can be ensured within a simulator, the algorithm converges to an optimal solution with probability 1. We test the algorithm extensively on a problem of airline revenue management in which the manager has to set ticket prices over the booking horizon. The problem is large in scale, suffering from the curse of dimensionality, and hence difficult to solve via classical dynamic programming methods. Our numerical results are encouraging and show that the algorithm outperforms an existing heuristic widely used in the airline industry.

5.
This paper studies mean maximization and variance minimization problems in finite horizon continuous-time Markov decision processes. The state and action spaces are assumed to be Borel spaces, while reward functions and transition rates are allowed to be unbounded. For the mean problem, we design a method called successive approximation, which enables us to prove the existence of a solution to the Hamilton-Jacobi-Bellman (HJB) equation, and then the existence of a mean-optimal policy under some growth and compact-continuity conditions. For the variance problem, using first-jump analysis, we convert the second moment of the finite horizon reward to the mean of a finite horizon reward with new reward functions under suitable conditions, based on which the associated HJB equation for the variance problem and the existence of variance-optimal policies are established. Value iteration algorithms for computing mean- and variance-optimal policies are proposed.
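For the mean problem, the HJB equation in this finite-horizon continuous-time setting typically takes the following generic form (our notation, with transition rates $q(dy \mid x,a)$, reward rate $r$, and horizon $T$; the paper's exact conditions are more delicate):

$$\frac{\partial V}{\partial t}(t,x) + \sup_{a \in A(x)} \Big[ r(x,a) + \int_S V(t,y)\, q(dy \mid x,a) \Big] = 0, \qquad V(T,x) = 0.$$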

6.
Computer Networks, 2002, 38(5): 613-630
In this work, we consider the problem of resource allocation in multi-class networks, where users specify the value they attach to obtaining different amounts of resource by means of a utility function. We develop a resource allocation scheme that maximizes the average aggregate utility per unit time. We formulate this resource allocation problem as a Markov decision process. We present numerical results that illustrate that our scheme performs better than the greedy resource allocation policy. We also discuss the implications of deliberate lying by users about their utility functions and develop a pricing scheme that prevents such lying.

7.
The optimization problems of Markov control processes (MCPs) with exact knowledge of the system parameters, in the form of transition probabilities or infinitesimal transition rates, can be solved using the concept of the Markov performance potential, which plays an important role in the sensitivity analysis of MCPs. In this paper, using an equivalent infinitesimal generator, we first introduce a definition of discounted Poisson equations for semi-Markov control processes (SMCPs), similar to that for MCPs, and define the performance potentials of SMCPs as the solutions of these equations. Some optimization techniques based on performance potentials for MCPs can then be extended to the optimization of SMCPs when the system parameters are known with certainty. Unfortunately, exact values of the sojourn-time distributions at some states, or of the transition probabilities of the embedded Markov chain, are generally difficult or impossible to obtain for a large-scale SMCP, which makes the semi-Markov kernel, and thereby the equivalent infinitesimal transition rates, uncertain. Similarly to the optimization of uncertain MCPs, a potential-based policy iteration method is proposed in this work to search for the optimal robust control policy for SMCPs whose uncertain infinitesimal transition rates are represented as compact sets. In addition, convergence of the algorithm is discussed.
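For orientation, a discounted Poisson equation of the kind referred to here can be written, for a fixed policy $\pi$ with equivalent infinitesimal generator $A^{\pi}$, reward $f^{\pi}$, and discount $\alpha > 0$ (a generic form in our notation, not the paper's exact statement), as

$$(\alpha I - A^{\pi})\, g^{\pi} = f^{\pi},$$

whose solution $g^{\pi}$ is the performance potential, $g^{\pi}(x) = \mathbb{E}_x^{\pi}\!\int_0^{\infty} e^{-\alpha t} f(X_t)\, dt$.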

8.
The jump linear quadratic Gaussian (JLQG) model is well studied owing to its wide applications. However, JLQG models with controlled jump rates have rarely been researched, and the existing studies usually assume that the jump rates are independent and controlled separately. In practical systems, the jump rates may not be independent of one another. In this paper, we consider a continuous-time JLQG model with dependently controlled jump rates and formulate it as a two-level control problem. The low-level problem is a standard JLQG problem, so we focus on the solution of the high-level problem. We propose a Markov decision process-based approach to calculate the performance gradient with respect to the jump-rate control variable and develop a gradient-based optimization algorithm. We present an application to a manufacturing system to illustrate the main results of the paper.

9.
The current study examines the dynamic vehicle allocation problem of the automated material handling system (AMHS) in semiconductor manufacturing. With the uncertainty involved in wafer lot movement, dynamically allocating vehicles to each intrabay is very difficult, and the cycle time and overall tool productivity of the wafer lots suffer when a vehicle takes too long to arrive. A Markov decision model is developed to study the vehicle allocation control problem in the AMHS, with the objective of minimizing the expected long-run average transport job waiting cost. An interesting exhaustive structure in the optimal vehicle allocation control is found from the Markov decision model, and based on this structure an efficient algorithm is developed to solve the vehicle allocation control problem numerically. The performance of the proposed method is verified by a simulation study. Compared with other methods, the proposed method can significantly reduce the waiting cost of wafer lots for AMHS vehicle transportation.

10.
This paper proposes a reinforcement learning-based lexicographic approach to the call admission control problem in communication networks. The admission control problem is modelled as a multi-constrained Markov decision process. To overcome the problems of the standard approaches to the solution of constrained Markov decision processes, based on the linear programming formulation or on a Lagrangian approach, a multi-constraint lexicographic approach is defined, and an online implementation based on reinforcement learning techniques is proposed. Simulations validate the proposed approach.

11.
We compare the computational performance of linear programming (LP) and the policy iteration algorithm (PIA) for solving discrete-time infinite-horizon Markov decision process (MDP) models under the total expected discounted reward criterion. We use randomly generated test problems as well as a real-life health-care problem to show empirically that, contrary to previous reports, barrier methods for LP provide a viable tool for optimally solving such MDPs. The dimensions of comparison include transition probability matrix structure, state and action space size, and the LP solution method.
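For concreteness, the primal LP formulation of a discounted MDP, solved with an interior-point (barrier-type) method, can be sketched as follows (a minimal illustration under our own conventions, not the authors' test setup):

```python
import numpy as np
from scipy.optimize import linprog

def solve_discounted_mdp_lp(P, r, gamma=0.95):
    """Solve a discounted MDP via its primal LP:
        min  sum_x v(x)
        s.t. v(x) >= r(x,a) + gamma * sum_y P[a][x,y] v(y)  for all x, a
    P[a] is the transition matrix and r[a] the reward vector under action a.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    I = np.eye(n_states)
    # One inequality row per (state, action) pair, in A_ub @ v <= b_ub form.
    A_ub = np.vstack([-(I - gamma * P[a]) for a in range(n_actions)])
    b_ub = np.concatenate([-r[a] for a in range(n_actions)])
    res = linprog(c=np.ones(n_states), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_states,
                  method="highs-ipm")  # interior-point (barrier-type) solver
    v = res.x
    # Extract the greedy policy with respect to the LP value function.
    q = np.array([r[a] + gamma * P[a] @ v for a in range(n_actions)])
    return v, q.argmax(axis=0)
```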

12.
We consider Markov decision processes (MDPs) in which the state transition probability distributions are not uniquely known but are known to belong to given intervals (so-called "controlled Markov set-chains"), with infinite-horizon discounted reward criteria. We present formal methods for improving multiple policies for solving such controlled Markov set-chains. Our multipolicy improvement methods follow the spirit of parallel rollout and policy switching for solving MDPs. In particular, these methods are useful for online control of Markov set-chains and for designing policy iteration (PI) type algorithms. We develop a PI-type algorithm and prove that it converges to an optimal policy.

13.
We generalize and build on the PAC learning framework for Markov decision processes developed in Jain and Varaiya (2006). We allow the reward function to depend on both the state and the action, and both the state and action spaces can be countably infinite. We obtain an estimate for the value function of a Markov decision process, which assigns to each policy its expected discounted reward; this expected reward can be estimated as the empirical average of the reward over many independent simulation runs. We derive bounds on the number of runs needed for the empirical average to converge to the expected reward uniformly over a class of policies, in terms of the VC or pseudo-dimension of the policy class. We then propose a framework for obtaining an ε-optimal policy from simulation and provide the sample complexity of this approach.
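A minimal sketch of the estimation step for a single fixed policy follows (the paper's uniform bounds over a policy class via VC/pseudo-dimension are stronger; a Hoeffding bound suffices for one policy, assuming one-step rewards in [0, r_max]; `simulate_return` is a hypothetical single-run simulator):

```python
import math
import numpy as np

def hoeffding_runs(eps, delta, r_max, gamma):
    """Number of independent runs so that the empirical mean of discounted
    returns (each bounded by r_max / (1 - gamma)) is within eps of the
    expected discounted reward with probability at least 1 - delta."""
    v_max = r_max / (1.0 - gamma)
    return math.ceil(v_max ** 2 * math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_value(simulate_return, n_runs, seed=0):
    """Empirical average of the discounted return over independent runs."""
    rng = np.random.default_rng(seed)
    return np.mean([simulate_return(rng) for _ in range(n_runs)])
```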

14.
Based on a Markov decision process, this paper proposes an optimal equivalent hydrogen consumption control strategy for fuel cell vehicles. Working from partially observed quantities and conditioned on a Markov transition probability matrix, the strategy uses Metropolis-Hastings sampling based on a Markov chain Monte Carlo (MCMC) algorithm to obtain an average reward output, and then optimizes an optimal hydrogen consumption cost function to control the energy split between the hydrogen fuel cell system and the traction battery system. The strategy avoids the excessive dependence of current fuel cell vehicle control strategies on prediction of future power demand and on the accuracy of the prediction model. After establishing the vehicle dynamics model and the models of the fuel cell and traction battery systems, a control strategy is designed comprising a self-learning system, an MH-sampling-based average reward filtering system, and a control selection output system. Simulation and experimental results demonstrate the effectiveness of the Markov decision-based control strategy.
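A generic Metropolis-Hastings sampler of the kind invoked here might look as follows (a sketch only; the target density, reward function, and step size are hypothetical placeholders, not the paper's specific models):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=5000, step=0.1, seed=0):
    """Generic Metropolis-Hastings sampler with a symmetric Gaussian
    proposal; `log_target` is the log-density of the target distribution."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.normal(size=x.shape)
        logp_prop = log_target(prop)
        # Accept with probability min(1, target(prop) / target(x)).
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        samples.append(x.copy())
    return np.array(samples)

def average_reward(samples, reward):
    """Monte Carlo estimate of the expected reward under the sampled states."""
    return np.mean([reward(s) for s in samples])
```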

15.
In this paper, we address the problem of suboptimal behavior during online partially observable Markov decision process (POMDP) planning caused by time constraints on planning. Taking inspiration from the related field of reinforcement learning (RL), our solution is to shape the agent's reward function so as to lead the agent toward large future rewards without having to spend as much time explicitly estimating cumulative future rewards, enabling the agent to save time, improve the breadth of its planning, and build higher-quality plans. Specifically, we extend potential-based reward shaping (PBRS) from RL to online POMDP planning. In our extension, information about belief states is added to the function the agent optimizes during planning. This information hints at where the agent might find high future rewards beyond its planning horizon and thus achieve greater cumulative rewards. We develop novel potential functions measuring information useful to agent metareasoning in POMDPs (reflecting on agent knowledge and/or histories of experience with the environment), theoretically prove several important properties and benefits of using PBRS for online POMDP planning, and empirically demonstrate these results on a range of classic benchmark POMDP planning problems.
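In the standard potential-based form carried over from RL, the shaping reward is a discounted difference of potentials over (here, belief) states,

$$F(b, a, b') = \gamma\, \Phi(b') - \Phi(b), \qquad \tilde r(b, a, b') = r(b, a, b') + F(b, a, b'),$$

a form known, in the MDP setting, to leave the optimal policy unchanged; the contribution above lies in the choice of the potential functions $\Phi$ and in the analysis for online POMDP planning.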

16.
We consider the problem of transmitting packets over a randomly varying point-to-point channel with the objective of minimizing the expected power consumption subject to a constraint on the average packet delay. By casting it as a constrained Markov decision process in discrete time with time-averaged costs, we prove structural results about the dependence of the optimal policy on the buffer occupancy, the number of packet arrivals in the previous slot, and the channel fading state, for both i.i.d. and Markov arrivals and channel fading. The techniques we use to establish these results (convexity, stochastic dominance, decreasing differences) are standard for the purpose. Our main contribution, however, is the passage to the average cost case, a notoriously difficult problem for which rather limited results are available. The novel proof techniques used here are likely to be useful in other stochastic control problems well beyond the immediate application considered here.

17.
This paper proposes a simple analytical model called the M-time-scale Markov decision process (MMDP) for hierarchically structured sequential decision-making processes, in which decisions at each level of the M-level hierarchy are made on M different discrete time scales. In this model, the state space and control space of each level in the hierarchy are non-overlapping with those of the other levels, and the hierarchy is structured in a "pyramid" sense: a decision made at a level-m (slower time scale) state affects the evolution of the decision-making process at the lower level m+1 (faster time scale) until a new decision is made at the higher level, but the lower-level decisions themselves do not affect the transition dynamics of the higher levels. The performance produced by the lower-level decisions does, however, affect the higher-level decisions. A hierarchical objective function is defined such that the finite-horizon value of following a (nonstationary) policy at level m+1 over a decision epoch of level m, plus an immediate reward at level m, is the single-step reward for the decision-making process at level m. From this we define a "multi-level optimal value function" and derive a "multi-level optimality equation." We discuss how to solve MMDPs exactly and study some approximation methods, along with heuristic sampling-based schemes, for solving MMDPs.

18.
This paper investigates the strictly dissipative stabilization problem for multiple-memory Markov jump systems under a network communication protocol. First, to reduce data transmission, we put forward a novel mode-dependent event-triggered communication scheme based on aperiodically sampled data. Second, a Markov jump system with general transition rates is considered to make the result more widely applicable: the transition rates of some jumping modes may be completely known, partially known, or even completely unknown. Third, a less restrictive Lyapunov-Krasovskii functional, required to be positive definite only at the endpoints of each subinterval of the holding intervals, is introduced for the first time for the event-triggered control problem. Based on these methods, a sufficient condition with less conservatism is obtained that ensures the stochastic stability and dissipativity of the resulting closed-loop system, and an explicit design method for the desired controller is given. Finally, two numerical examples demonstrate the effectiveness and advantages of the proposed method.

19.
Motion planning algorithms are an important research topic in autonomous driving systems and have attracted increasing attention. However, most existing algorithms consider only deterministic, structured environments and ignore the potential uncertainty in dynamic traffic environments. Focusing on uncertain environments, this paper groups motion planning algorithms into two classes, partially observable Markov decision processes (POMDPs) and probabilistic occupancy grid maps (POGMs), and surveys them from three perspectives: theoretical foundations, solution algorithms, and practical applications. Based on the current belief state, a POMDP computes the policy that maximizes the expected future discounted reward. A POGM uses probabilities to characterize the occupancy of each grid cell, quantifying the likelihood of dynamic changes in traffic flow and thereby representing uncertainty well. Finally, the main challenges currently facing motion planning in uncertain environments and possible future research directions are summarized.

20.
Fuzzy predictive control integrates conventional model-based predictive control with techniques from fuzzy multicriteria decision making. The information regarding the fuzzy criteria of the control problem is combined using a decision function from fuzzy set theory. The use of fuzzy criteria in the cost function usually leads to a non-convex optimization problem, which is numerically complex. The numerical optimization problem becomes more tractable by discretizing the control actions and limiting the search for the optimal solution to this space. This paper extends the branch-and-bound optimization technique to predictive control problems with fuzzy cost functions. This approach can significantly reduce the search time, allowing the application of fuzzy predictive control to a broader class of systems and to real-time control problems. Simulation and real-time results of temperature control in a fan-coil unit show the applicability of the approach.
