
Article Information

  • Title: Finite Markov Decision Process Frame Work Algorithm
  • Authors: P. Sushma; Yogesh Kumar Sharma; S. Naga Prasad
  • Journal: International Journal of Computer Science and Information Technologies
  • Electronic ISSN: 0975-9646
  • Year: 2021
  • Volume: 12
  • Issue: 2
  • Pages: 56-59
  • Language: English
  • Publisher: TechScience Publications
  • Abstract: In machine learning, the environment is formulated as a Markov decision process (MDP), since many reinforcement learning algorithms for this setting utilize dynamic programming techniques. In contrast to supervised learning and unsupervised learning, the paradigm of reinforcement learning deals with learning in sequential decision-making problems in which there is limited feedback. RL is a general class of algorithms in the field of machine learning that aims at allowing an agent to learn how to behave in an environment where the only feedback consists of a scalar reward signal. This text introduces the intuitions and concepts behind Markov decision processes and two classes of algorithms for computing optimal behaviors: reinforcement learning and dynamic programming. First, the formal framework of the Markov decision process is defined, accompanied by the definition of value functions and policies. The main part of this text introduces foundational classes of algorithms for learning optimal behaviors, based on several notions of optimality with respect to the goal of learning sequential decisions. Additionally, it surveys efficient extensions of the foundational algorithms, differing mainly in the way feedback given by the environment is used to speed up learning, and in the way they concentrate on relevant parts of the problem. For both model-based and model-free settings, these efficient extensions have proven useful in scaling up to larger problems.
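
As an illustration of the dynamic-programming approach the abstract refers to, the sketch below runs value iteration on a toy finite MDP and extracts a greedy policy from the converged value function. The states, actions, transition table, rewards, and discount factor are invented for this example and are not taken from the article.

```python
# Minimal value-iteration sketch for a hypothetical 3-state finite MDP.
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 0.5)], "go": [(1.0, 2, 2.0)]},
    2: {"stay": [(1.0, 2, 0.0)], "go": [(1.0, 2, 0.0)]},  # absorbing state
}
gamma = 0.9    # discount factor (illustrative choice)
theta = 1e-8   # convergence tolerance

V = {s: 0.0 for s in P}  # value function, initialised to zero
while True:
    delta = 0.0
    for s in P:
        # Bellman optimality backup: best expected return over all actions.
        q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s]]
        best = max(q)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:
        break

# Greedy policy with respect to the converged value function.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```

The same Bellman backup underlies the model-free algorithms mentioned in the abstract; there, the expectation over transitions is estimated from sampled experience rather than computed from a known model.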