Article Information

  • Title: Open Problem—Convergence and Asymptotic Optimality of the Relative Value Iteration in Ergodic Control
  • Author: Ari Arapostathis
  • Journal: Stochastic Systems
  • Print ISSN: 1946-5238
  • Year: 2019
  • Volume: 9
  • Issue: 3
  • Pages: 292-294
  • DOI: 10.1287/stsy.2019.0040
  • Language: English
  • Publisher: Institute for Operations Research and the Management Sciences (INFORMS), Applied Probability Society
  • Abstract: The relative value iteration scheme (RVI) for Markov decision processes (MDPs) dates back to White (1963), a seminal work that introduced an algorithm for solving the ergodic dynamic programming equation in the finite state, finite action case. Its ramifications have given rise to popular learning algorithms such as Q-learning. More recently, the algorithm gained prominence because of its implications for model predictive control (MPC). For stochastic control problems on an infinite time horizon, especially for problems that seek to optimize the average performance (ergodic control), obtaining the optimal policy in explicit form is possible only for a few classes of well-structured models. What is often used in practice is a heuristic method called the rolling horizon, or receding horizon, or MPC. It works as follows: one solves the finite horizon problem for a given number of steps N, or over an interval [0,T] in the case of a continuous time problem. The result is a nonstationary Markov policy that is optimal for the finite horizon problem. We fix the initial action (the action determined at the Nth step of the value iteration (VI) algorithm) and apply it as a stationary Markov control, which we refer to as the rolling horizon control. This control of course depends on the length of the horizon N. One expects that for well-structured problems, if N is sufficiently large, then the rolling horizon control is near optimal. This is, of course, a heuristic; the rolling horizon control might not even be stable. For a good discussion of this problem, we refer the reader to Della Vecchia et al. (2012). Obtaining such solutions is further complicated by the fact that the value of the ergodic cost required in the successive iteration scheme is not known. This is the reason for the RVI (a finite-state sketch of the RVI update is given after this record).
  • Keywords: stochastic networks; control; optimization
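
Since the abstract describes the finite state, finite action setting of White (1963), a small sketch may help fix ideas. The Python snippet below is a minimal illustration of relative value iteration for an average-cost MDP under unichain and aperiodicity assumptions; the function name relative_value_iteration, the arrays P and c, the reference state ref_state, and the example data are hypothetical choices made for this sketch and are not taken from the paper.

```python
import numpy as np

def relative_value_iteration(P, c, ref_state=0, tol=1e-8, max_iter=10_000):
    """Sketch of relative value iteration for a finite average-cost MDP.

    P : array of shape (A, S, S); P[a, s, s'] is the transition probability.
    c : array of shape (S, A); one-step cost of action a in state s.
    Assumes a unichain, aperiodic model (otherwise an aperiodicity
    transformation is typically applied first).
    Returns (rho, h, policy): the average-cost estimate, the relative
    value function, and a greedy stationary policy.
    """
    S, A = c.shape
    h = np.zeros(S)
    rho = 0.0
    for _ in range(max_iter):
        # Bellman backup: Q[s, a] = c(s, a) + sum_{s'} P(s' | s, a) h(s')
        Q = c + np.einsum('asj,j->sa', P, h)
        V = Q.min(axis=1)          # V(s) = min_a Q(s, a)
        rho = V[ref_state]         # gauge: value at the reference state
        h_new = V - rho            # subtract it to keep iterates bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    policy = Q.argmin(axis=1)
    return rho, h, policy

# Example with hypothetical data: a 2-state, 2-action MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])   # P[a, s, s']
c = np.array([[1.0, 2.0], [0.5, 3.0]])     # c[s, a]
rho, h, pi = relative_value_iteration(P, c)
```

Under these assumptions, rho is expected to converge to the optimal ergodic cost and the greedy policy to an optimal stationary policy; this is the classical finite-state behavior, whereas the note's open problem concerns the convergence and asymptotic optimality of the RVI for more general ergodic control models.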