
Article Information

  • Title: Profit Sharing Introducing the Judgement of Incomplete Perception
  • Authors: Ken Saito; Shiro Masuda
  • Journal: 人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence)
  • Print ISSN: 1346-0714
  • Electronic ISSN: 1346-8030
  • Publication year: 2004
  • Volume: 19
  • Issue: 5
  • Pages: 379-388
  • DOI:10.1527/tjsai.19.379
  • Publisher: The Japanese Society for Artificial Intelligence
  • Abstract: To apply reinforcement learning to difficult classes of problems such as real-environment learning, we need a method that is robust to the perceptual aliasing problem. Exploitation-oriented methods such as Profit Sharing can cope with perceptual aliasing to a certain extent. However, when the agent must select different actions for the same sensory input, learning efficiency worsens. To overcome this problem, several state-partition methods that use history information of state-action pairs have been proposed. These methods try to convert a POMDP environment into an MDP environment and are therefore sometimes very useful, but their computation cost is very high, especially in large state spaces. In contrast, memory-less approaches try to escape from aliased states by selecting actions stochastically. However, these methods also act stochastically in unaliased states, so their learning efficiency is poor. If we wish to guarantee rationality in POMDPs, it is efficient to select actions stochastically only in aliased states and to select a single action deterministically in unaliased states. To discriminate aliased states from unaliased states, Miyazaki et al. proposed using the χ² goodness-of-fit test. They point out that, in aliased states, the distributions of state transitions under random search and under a particular policy differ, and that this difference does not arise from non-deterministic actions. Hence, if the agent can collect enough samples to run the test, it can distinguish aliased states from unaliased states well. However, such a test needs a large amount of data, and how the agent collects these samples without degrading learning efficiency is itself a problem: if the agent uses random search during learning, learning efficiency worsens, especially in unaliased states. Therefore, in this research we propose a new method, Extended On-line Profit Sharing with Judgement (EOPSwJ), that detects important incomplete perception without a large computation cost or numerous samples. We use two criteria for detecting the incomplete perceptions that matter for attaining a task: the rate of transitions to each state, and the deterministic rate of actions. We confirm the effectiveness of EOPSwJ using two simulations. (An illustrative sketch of the aliased-state test described here follows the keyword list.)
  • Keywords: reinforcement learning; POMDPs; profit sharing (PS); extended on-line profit sharing (EOPS); extended on-line profit sharing with judgement (EOPSwJ)
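
The abstract describes judging whether a sensory input is perceptually aliased by comparing the state-transition distributions observed under random search and under a particular policy. Below is a minimal, hypothetical Python sketch of that kind of χ² judgement, written as a two-sample (contingency-table) form of the test and assuming SciPy is available. Names such as `is_aliased`, `counts_random`, `counts_greedy`, and `ALPHA` are invented for illustration; this is not the authors' EOPSwJ implementation, which is proposed precisely to avoid requiring such large samples.

```python
# Illustrative sketch: flag a perceptually aliased sensory input by comparing
# next-state transition counts gathered under a random policy with those
# gathered under the current (greedy) policy, using a chi-squared test.
# A significant difference suggests the input aggregates several true states.
import numpy as np
from scipy.stats import chi2_contingency

ALPHA = 0.05  # assumed significance level for the test


def is_aliased(counts_random, counts_greedy, alpha=ALPHA):
    """Return True if the next-state distributions under the two policies
    differ significantly for this sensory input (i.e. it looks aliased)."""
    table = np.array([counts_random, counts_greedy], dtype=float)
    # Drop next-states never observed under either policy so the
    # contingency table has no all-zero columns.
    table = table[:, table.sum(axis=0) > 0]
    if table.shape[1] < 2 or (table.sum(axis=1) == 0).any():
        return False  # not enough evidence to judge either way
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha


# Example: transition counts to three next-states from one sensory input.
counts_random = [18, 22, 20]  # observed under random search
counts_greedy = [35, 3, 2]    # observed under the learned policy
print(is_aliased(counts_random, counts_greedy))  # likely True: distributions differ
```

As the abstract notes, gathering enough counts for such a test by random search is itself costly, which is the motivation for EOPSwJ's cheaper criteria (the rate of transitions to each state and the deterministic rate of actions).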