相似文献 (Similar documents)
共查询到20条相似文献 (20 similar documents found)
1.
Algorithms based on upper confidence bounds for balancing exploration and exploitation are gaining popularity since they are easy to implement, efficient and effective. This paper considers a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms. In earlier experimental works, such algorithms were found to outperform the competing algorithms. We provide the first analysis of the expected regret for such algorithms. As expected, our results show that the algorithm that uses the variance estimates has a major advantage over its alternatives that do not use such estimates provided that the variances of the payoffs of the suboptimal arms are low. We also prove that the regret concentrates only at a polynomial rate. This holds for all the upper confidence bound based algorithms and for all bandit problems except those special ones where with probability one the payoff obtained by pulling the optimal arm is larger than the expected payoff for the second best arm. Hence, although upper confidence bound bandit algorithms achieve logarithmic expected regret rates, they might not be suitable for a risk-averse decision maker. We illustrate some of the results by computer simulations.  相似文献   
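As a concrete illustration, here is a minimal Python sketch of a variance-aware UCB index of the kind analyzed in this line of work; the function names, the constants in the bonus, and the assumption that rewards lie in [0, b] are illustrative choices, not the paper's exact algorithm.

import math

def ucbv_index(mean, var, pulls, t, b=1.0):
    # Variance-aware optimistic index: empirical mean plus a bonus that scales with
    # the empirical variance, plus a correction term for the reward range b.
    bonus = math.sqrt(2.0 * var * math.log(t) / pulls) + 3.0 * b * math.log(t) / pulls
    return mean + bonus

def choose_arm(stats, t):
    # stats: list of (mean, var, pulls) triples; play each arm once before using the index.
    for i, (_, _, pulls) in enumerate(stats):
        if pulls == 0:
            return i
    return max(range(len(stats)), key=lambda i: ucbv_index(*stats[i], t))

The variance term shrinks the exploration bonus of low-variance suboptimal arms, which is the source of the advantage over variance-agnostic indices discussed above.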

2.
Graph bandits are an important model for sequential decision making in uncertain environments and have been widely applied in areas such as social networks, e-commerce, and recommender systems. Existing work on graph bandits focuses only on quickly identifying the optimal arm so as to minimize cumulative regret, and ignores the privacy-protection issues that arise in many application scenarios. To overcome this shortcoming of existing graph-bandit algorithms, we propose GAP (a differentially private arm-elimination policy under graph feedback), a graph-bandit algorithm that satisfies differential privacy. On the one hand, GAP updates its arm-selection policy in phases according to the arms' empirical mean rewards, and injects Laplace noise when computing these empirical means, so that a malicious attacker can hardly infer the arms' reward data from the algorithm's output, thereby protecting privacy. On the other hand, in each phase GAP explores the arm set according to independent sets of a carefully constructed feedback graph, making effective use of the graph-structured feedback. We prove that GAP satisfies differential privacy and enjoys a regret bound that matches the theoretical lower bound. Experimental results on simulated datasets show that GAP attains cumulative regret comparable to that of existing graph-bandit algorithms without privacy protection while effectively protecting privacy.  相似文献
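A minimal Python sketch of the Laplace perturbation of empirical mean rewards described above; the function name, the per-query sensitivity accounting, and the epsilon parameter are illustrative assumptions and do not reproduce GAP's phase-wise privacy budgeting.

import numpy as np

def private_empirical_mean(rewards, epsilon, reward_range=1.0, rng=None):
    # Empirical mean of one arm's rewards, perturbed with Laplace noise.
    # The mean of n rewards bounded in [0, reward_range] changes by at most
    # reward_range / n when one reward changes, so Laplace noise with scale
    # sensitivity / epsilon makes this single query epsilon-differentially private.
    rng = rng or np.random.default_rng()
    n = len(rewards)
    sensitivity = reward_range / n
    return float(np.mean(rewards)) + rng.laplace(0.0, sensitivity / epsilon)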

3.
We obtain minimax lower bounds on the regret for the classical two-armed bandit problem. We provide a finite-sample minimax version of the well-known log n asymptotic lower bound of Lai and Robbins (1985). The finite-time lower bound allows us to derive conditions for the amount of time necessary to make any significant gain over a random guessing strategy. These bounds depend on the class of possible distributions of the rewards associated with the arms. For example, in contrast to the log n asymptotic results on the regret, we show that the minimax regret is achieved by mere random guessing under fairly mild conditions on the set of allowable configurations of the two arms. That is, we show that for every allocation rule and for every n, there is a configuration such that the regret at time n is at least 1-ϵ times the regret of random guessing, where ϵ is any small positive constant  相似文献   
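For reference, the Lai and Robbins (1985) asymptotic lower bound that this finite-sample analysis complements states that, for any uniformly good allocation rule and every suboptimal arm j,

\[ \liminf_{n\to\infty}\frac{\mathbb{E}\,[T_n(j)]}{\log n}\;\ge\;\frac{1}{D(p_j\,\|\,p^{*})}, \]

where \(D(p_j\,\|\,p^{*})\) is the Kullback-Leibler divergence between the reward distribution of arm j and that of the optimal arm; weighting each term by the gap \(\mu^{*}-\mu_j\) and summing gives the corresponding \(\log n\) lower bound on the regret.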

4.
Finite-time Analysis of the Multiarmed Bandit Problem
Auer  Peter  Cesa-Bianchi  Nicolò  Fischer  Paul 《Machine Learning》2002,47(2-3):235-256
Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions and taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is, the loss incurred because the globally optimal policy is not followed at all times. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.  相似文献
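The simplest of these policies is UCB1; below is a minimal Python sketch of its arm-selection rule, assuming rewards in [0, 1] (the function name and data layout are illustrative).

import math

def ucb1_choose(means, counts, total_plays):
    # Play every arm once first, then pick the arm with the largest optimistic index.
    for j, c in enumerate(counts):
        if c == 0:
            return j
    return max(
        range(len(means)),
        key=lambda j: means[j] + math.sqrt(2.0 * math.log(total_plays) / counts[j]),
    )

After the initial pass that plays every arm once, the policy always plays the arm with the largest optimistic index; the logarithmic exploration term is what yields the uniform-over-time logarithmic regret discussed above.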

5.
In the multiarmed bandit problem the dilemma between exploration and exploitation in reinforcement learning is expressed as a model of a gambler playing a slot machine with multiple arms. A policy chooses an arm in each round so as to minimize the number of times that arms with suboptimal expected rewards are pulled. We propose the minimum empirical divergence (MED) policy and derive an upper bound on the finite-time regret which meets the asymptotic bound for the case of finite support models. In a setting similar to ours, Burnetas and Katehakis have already proposed an asymptotically optimal policy. However, we do not assume any knowledge of the support except for its upper and lower bounds. Furthermore, the criterion for choosing an arm, minimum empirical divergence, can be computed easily by a convex optimization technique. We confirm by simulations that the MED policy demonstrates good performance in finite time in comparison to other currently popular policies.  相似文献   

6.

We present algorithms for solving multi-armed and linear-contextual bandit tasks in the face of adversarial corruptions in the arm responses. Traditional algorithms for solving these problems assume that nothing but mild, e.g., i.i.d. sub-Gaussian, noise disrupts an otherwise clean estimate of the utility of the arm. This assumption and the resulting approaches can fail catastrophically if there is an observant adversary that corrupts even a small fraction of the responses generated when arms are pulled. To rectify this, we propose algorithms that use recent advances in robust statistical estimation to perform arm selection in polynomial time. Our algorithms are easy to implement and vastly outperform several existing UCB and EXP-style algorithms for stochastic and adversarial multi-armed and linear-contextual bandit problems in a wide variety of experimental settings. Our algorithms enjoy minimax-optimal regret bounds and can tolerate an adversary that is allowed to corrupt up to a universal constant fraction of the arms pulled by the algorithm.  相似文献
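The abstract does not spell out which robust estimator is used; purely as an illustration of the idea, a UCB-style index built around a trimmed mean (a standard robust estimator) might look like the Python sketch below, where the trimming fraction and the confidence radius are assumptions rather than the paper's algorithm.

import math

def trimmed_mean(samples, trim_frac):
    # Robust location estimate: sort and drop the smallest and largest trim_frac fraction.
    s = sorted(samples)
    k = int(len(s) * trim_frac)
    kept = s[k:len(s) - k] if len(s) > 2 * k else s
    return sum(kept) / len(kept)

def robust_ucb_index(samples, t, trim_frac=0.05, c=1.0):
    # Optimistic index around the robust estimate; c and the radius are illustrative choices.
    n = len(samples)
    return trimmed_mean(samples, trim_frac) + c * math.sqrt(2.0 * math.log(t) / n)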

7.
We propose a method that learns to allocate computation time to a given set of algorithms, of unknown performance, with the aim of solving a given sequence of problem instances in a minimum time. Analogous meta-learning techniques are typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. We adopt instead an online approach, named GAMBLETA, in which algorithm performance models are iteratively updated, and used to guide allocation on a sequence of problem instances. GAMBLETA is a general method for selecting among two or more alternative algorithm portfolios. Each portfolio has its own way of allocating computation time to the available algorithms, possibly based on performance models, in which case its performance is expected to improve over time, as more runtime data becomes available. The resulting exploration-exploitation trade-off is represented as a bandit problem. In our previous work, the algorithms corresponded to the arms of the bandit, and allocations evaluated by the different portfolios were mixed, using a solver for the bandit problem with expert advice, but this required the setting of an arbitrary bound on algorithm runtimes, invalidating the optimal regret of the solver. In this paper, we propose a simpler version of GAMBLETA, in which the allocators correspond to the arms, such that a single portfolio is selected for each instance. The selection is represented as a bandit problem with partial information, and an unknown bound on losses. We devise a solver for this game, proving a bound on its expected regret. We present experiments based on results from several solver competitions, in various domains, comparing GAMBLETA with another online method.  相似文献   

8.
In its most basic form, bandit theory is concerned with the design problem of sequentially choosing members from a given collection of random variables so that the regret, i.e., \(R_n=\sum_j(\mu^{*}-\mu_j)\,E\,T_n(j)\), grows as slowly as possible with increasing n. Here \(\mu_j\) is the expected value of the bandit arm (i.e., random variable) indexed by j, \(T_n(j)\) is the number of times arm j has been selected in the first n decision stages, and \(\mu^{*}=\sup_j \mu_j\). The present paper contributes to the theory by considering the situation in which observations are dependent. To begin with, the dependency is presumed to depend only on past observations of the same arm, but later, we allow that it may be with respect to the entire past and that the set of arms is infinite. This brings queues and, more generally, controlled Markov processes into our purview. Thus our "black-box" methodology is suitable for the case when the only observables are cost values and, in particular, the probability structure and loss function are unknown to the designer. The conclusion of the analysis is that under lenient conditions, using algorithms prescribed herein, risk growth is commensurate with that in the simplest i.i.d. cases. Our methods represent an alternative to stochastic-approximation/perturbation-analysis ideas for tuning queues.  相似文献

9.
The two-armed bandit problem is a classical optimization problem where a decision maker sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in a random reward. The reward distributions are unknown, and thus, one must balance between exploiting existing knowledge about the arms and obtaining new information. Bandit problems are particularly fascinating because a large class of real-world problems, including routing, Quality of Service (QoS) control, game playing, and resource allocation, can be solved in a decentralized manner when modeled as a system of interacting gambling machines. Although computationally intractable in many cases, Bayesian methods provide a standard for optimal decision making. This paper proposes a novel scheme for decentralized decision making based on the Goore Game in which each decision maker is inherently Bayesian in nature, yet avoids computational intractability by relying simply on updating the hyperparameters of sibling conjugate priors, and on random sampling from these posteriors. We further report theoretical results on the variance of the random rewards experienced by each individual decision maker. Based on these theoretical results, each decision maker is able to accelerate its own learning by taking advantage of the increasingly more reliable feedback that is obtained as exploration gradually turns into exploitation in bandit-based learning. Extensive experiments, involving QoS control in simulated wireless sensor networks, demonstrate that the accelerated learning allows us to combine the benefits of conservative learning (high accuracy) with the benefits of hurried learning (fast convergence). In this manner, our scheme outperforms recently proposed Goore Game solution schemes, where one has to trade off accuracy with speed. As an additional benefit, performance also becomes more stable. We thus believe that our methodology opens avenues for improved performance in a number of applications of bandit-based decentralized decision making.  相似文献
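A minimal Python sketch of the conjugate-prior bookkeeping and posterior sampling that such a decision maker relies on; the Goore Game coordination and the variance-based acceleration described above are omitted, and the class and function names are illustrative.

import random

class BetaBernoulliArm:
    # Conjugate Beta prior over a Bernoulli reward probability.
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta  # hyperparameters of the Beta prior

    def update(self, reward):
        # Bayesian update for a 0/1 reward: a success increments alpha, a failure increments beta.
        self.alpha += reward
        self.beta += 1 - reward

    def sample(self):
        # Draw one value from the current Beta posterior.
        return random.betavariate(self.alpha, self.beta)

def choose(arms):
    # Pick the arm whose posterior sample is largest (a Thompson-sampling-style decision).
    return max(range(len(arms)), key=lambda i: arms[i].sample())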

10.
The K-armed bandit problem is a well-known formalization of the exploration versus exploitation dilemma. In this learning problem, a player is confronted with a gambling machine with K arms, where each arm is associated with an unknown gain distribution. The goal of the player is to maximize the sum of the rewards. Several approaches have been proposed in the literature to deal with the K-armed bandit problem. This paper first introduces the concept of "expected reward of greedy actions", which is based on the notion of probability of correct selection (PCS), well known in the simulation literature. This concept is then used in an original semi-uniform algorithm which relies on the dynamic programming framework and on estimation techniques to optimally balance exploration and exploitation. Experiments with a set of simulated and realistic bandit problems show that the new DP-greedy algorithm is competitive with state-of-the-art semi-uniform techniques.  相似文献

11.
K. Hiraoka  S. Amari 《Algorithmica》1998,22(1-2):138-156
The bandit problem consists of two factors, one being exploration, or the collection of information on the environment, and the other being exploitation, or taking benefit by choosing the optimal action in the uncertain environment. It is desirable to choose only the optimal action for exploitation, while exploration, or the collection of information, requires taking a variety of (nonoptimal) actions as trials. Hence, in order to obtain the maximal cumulative gain, we need to compromise between the exploration and exploitation processes. We treat a situation where our actions change the structure of the environment, a simple example of which is formulated as the lob-pass problem by Abe and Takeuchi. Usually, the environment is specified by a finite number of unknown parameters in the bandit problem, so that the information-collection part is to estimate their true values. This paper treats a more realistic situation of nonparametric estimation of the environment structure, which includes an infinite number (a functional degree) of unknown parameters. A strategy is given under such circumstances, proving that the cumulative regret can be made of order \(O(\log t)\), \(O((\log t)^{2})\), or \(O(t^{1-\sigma})\) with \(0<\sigma<1\), depending on the dynamics of the environment, where t is the number of trials, in contrast with the optimal order \(O(\log t)\) in the parametric case.  相似文献

12.
Machine and Statistical Learning techniques are used in almost all online advertisement systems. The problem of discovering which content is more demanded (e.g. receives more clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information or associative reinforcement learning) associate with each specific content several features that define the "context" in which it appears (e.g. user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional probability paradigm using Bayes' theorem. However, for very large contextual information and/or real-time constraints, the exact calculation of Bayes' rule is computationally infeasible. In this article, we present a method that is able to handle large contextual information for learning in contextual-bandit problems. This method was tested on the Yahoo! dataset challenge at the ICML 2012 workshop "New Challenges for Exploration & Exploitation 3", finishing in second place. Its basic exploration policy is deterministic in the sense that for the same input data (as a time series) the same results are obtained. We address the deterministic exploration vs. exploitation issue, explaining the way in which the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that use a random number generator.  相似文献

13.
A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in payoff from a distribution over [0,1] associated with that arm. We assume contextual side information is available at the start of the episode. This context enables an arm predictor to identify possible favorable arms, but predictions may be imperfect so that they need to be combined with further exploration during the episode. Our setting is an alternative to classical multi-armed bandits which provide no contextual side information, and is also an alternative to contextual bandits which provide new context each individual trial. Multi-armed bandits with episode context can arise naturally, for example in computer Go where context is used to bias move decisions made by a multi-armed bandit algorithm. The UCB1 algorithm for multi-armed bandits achieves worst-case regret bounded by \(O\left(\sqrt{Kn\log(n)}\right)\). We seek to improve this using episode context, particularly in the case where K is large. Using a predictor that places weight \(M_i > 0\) on arm i, with weights summing to 1, we present the PUCB algorithm, which achieves regret \(O\left(\frac{1}{M_{\ast}}\sqrt{n\log(n)}\right)\), where \(M_{\ast}\) is the weight on the optimal arm. We illustrate the behavior of PUCB with small simulation experiments, present extensions that provide additional capabilities for PUCB, and describe methods for obtaining suitable predictors for use with PUCB.  相似文献

14.
Online kernel selection is an important component of online kernel methods and can be divided into three types: filter, wrapper, and embedded. Existing work on online kernel selection has explored wrapper and embedded approaches and has also adopted filter approaches empirically, but so far there has been no unified framework for comparing, analyzing, and studying the various online kernel selection problems. This paper proposes a multi-armed bandit model for online kernel selection that serves as a unified framework and yields both wrapper and embedded online kernel selection methods. Given a set of candidate kernels, each kernel in the candidate set corresponds to an arm of the multi-armed bandit; in each round of online kernel selection, several kernels are repeatedly sampled at random according to a probability distribution, and this distribution is updated by an exponential weighting method. In this way, the online kernel selection problem essentially reduces to an adversarial multi-armed bandit problem against a non-oblivious adversary, and the adversarial bandit model provides wrapper and embedded online kernel selection methods in a unified manner. The paper further proposes a new notion of regret for online kernel selection and proves that the wrapper method has a weak expected regret bound that is sublinear in the number of rounds, while the embedded method has an expected regret bound that is sublinear in the number of rounds. Finally, experiments on standard datasets verify the feasibility of the proposed unified framework.  相似文献
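A minimal Python sketch of the exponential-weighting update over candidate kernels described above; sampling a single kernel per round, the importance-weighted loss estimate, and the learning rate eta are illustrative assumptions rather than the paper's exact procedure.

import math
import random

def sample_kernel(weights):
    # Sample one candidate kernel with probability proportional to its weight.
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

def exp3_update(weights, chosen, loss, eta):
    # Importance-weighted loss estimate for the chosen kernel, then exponential decay
    # of its weight; losses are assumed to lie in [0, 1].
    total = sum(weights)
    prob_chosen = weights[chosen] / total
    est_loss = loss / prob_chosen
    weights[chosen] *= math.exp(-eta * est_loss)
    return weights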

15.
In this paper we introduce the notion of approximate implementations for Probabilistic I/O Automata (PIOA) and develop methods for proving such relationships. We employ a task structure on the locally controlled actions and a task scheduler to resolve nondeterminism. The interaction between a scheduler and an automaton gives rise to a trace distribution—a probability distribution over the set of traces. We define a PIOA to be a (discounted) approximate implementation of another PIOA if the set of trace distributions produced by the first is close to that of the latter, where closeness is measured by the (resp. discounted) uniform metric over trace distributions. We propose simulation functions for proving approximate implementations corresponding to each of the above types of approximate implementation relations. Since our notion of similarity of traces is based on a metric on trace distributions, we do not require the state spaces nor the space of external actions of the automata to be metric spaces. We discuss applications of approximate implementations to verification of probabilistic safety and termination.  相似文献   

16.
We consider an optimization problem in which the cost of a feasible solution depends on a set of unknown parameters (scenario) that will be realized. In order to assess the cost of implementing a given solution, its performance is compared with the optimal one under each feasible scenario. The positive difference between the objective values of both solutions defines the regret corresponding to a fixed scenario. The proposed optimization model seeks a compromise solution by minimizing the expected regret, where the expectation is taken with respect to a probability distribution that depends on the same solution that is being evaluated, which is called a solution-dependent probability distribution. We study the optimization model obtained by applying a specific family of solution-dependent probability distributions to the shortest path problem where the unknown parameters are the arc lengths of the network. This approach can be used to generate new models for robust optimization where the degree of conservatism is calibrated by using different families of probability distributions for the unknown parameters.  相似文献

17.
Investigates the multiarmed bandit problem, where each arm generates an infinite sequence of Bernoulli distributed rewards. The parameters of these Bernoulli distributions are unknown and initially assumed to be beta-distributed. Every time a bandit is selected, its beta distribution is updated with the new information in a Bayesian way. The objective is to maximize the long-term discounted rewards. We study the relationship between the necessity of acquiring additional information and the reward. This is done by considering two extreme situations, which occur when a bandit has been played N times: the situation where the decision maker stops learning and the situation where the decision maker acquires full information about that bandit. We show that the difference in reward between these lower and upper bounds goes to zero as N grows large.  相似文献

18.
This paper addresses the issue of computational resource allocation within the context of cooperative coevolution. Cooperative coevolution typically works by breaking a problem down into smaller subproblems (or components) and coevolving them in a round-robin fashion, resulting in a uniform resource allocation among its components. Despite its success on a wide range of problems, cooperative coevolution struggles to perform efficiently when its components do not contribute equally to the overall objective value. This is of crucial importance on large-scale optimization problems where such differences are further magnified. To resolve this imbalance problem, we extend standard cooperative coevolution to a new generic framework capable of learning the contribution of each component using multi-armed bandit techniques. The new framework allocates computational resources to each component in proportion to its contribution towards improving the overall objective value. This approach results in a more economical use of the limited computational resources. We study different aspects of the proposed framework in the light of extensive experiments. Our empirical results confirm that even a simple bandit-based credit assignment scheme can significantly improve the performance of cooperative coevolution on large-scale continuous problems, leading to competitive performance as compared to state-of-the-art algorithms.  相似文献

19.
In this paper, we investigate algorithmic randomness on more general spaces than the Cantor space, namely computable metric spaces. To do this, we first develop a unified framework allowing computations with probability measures. We show that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense. We show that any computable metric space admits a universal uniform randomness test (without further assumption).  相似文献   

20.
Recently, it has been shown that the regret of the Follow the Regularized Leader (FTRL) algorithm for online linear optimization can be bounded by the total variation of the cost vectors rather than the number of rounds. In this paper, we extend this result to general online convex optimization. In particular, this resolves an open problem that has been posed in a number of recent papers. We first analyze the limitations of the FTRL algorithm as proposed by Hazan and Kale (in Machine Learning 80(2–3), 165–188, 2010) when applied to online convex optimization, and extend the definition of variation to a gradual variation which is shown to be a lower bound of the total variation. We then present two novel algorithms that bound the regret by the gradual variation of cost functions. Unlike previous approaches that maintain a single sequence of solutions, the proposed algorithms maintain two sequences of solutions that make it possible to achieve a variation-based regret bound for online convex optimization. To establish the main results, we discuss a lower bound for FTRL that maintains only one sequence of solutions, and a necessary condition on smoothness of the cost functions for obtaining a gradual variation bound. We extend the main results three-fold: (i) we present a general method to obtain a gradual variation bound measured by general norm; (ii) we extend algorithms to a class of online non-smooth optimization with gradual variation bound; and (iii) we develop a deterministic algorithm for online bandit optimization in multipoint bandit setting.  相似文献   
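For reference, the FTRL update that this line of work builds on can be written, for a convex decision set \(\mathcal{X}\), a regularizer \(R\), and a step size \(\eta\), as

\[ x_{t+1}\;=\;\arg\min_{x\in\mathcal{X}}\Big\{\sum_{\tau=1}^{t} f_\tau(x)\;+\;\tfrac{1}{\eta}\,R(x)\Big\}, \]

where for online linear optimization \(f_\tau(x)=\langle g_\tau, x\rangle\); the algorithms in this paper modify this template (maintaining two sequences of solutions) to obtain the gradual-variation regret bounds.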
