Journal: International Journal on Electrical Engineering and Informatics
Print ISSN: 2085-6830
Year: 2014
Volume: 6
Issue: 4
DOI: 10.15676/ijeei.2014.6.4.3
Publisher: School of Electrical Engineering and Informatics
Abstract: Most successes in accelerating reinforcement learning (RL) have incorporated internal knowledge or human intervention into the learning system, such as reward shaping, transfer learning, parameter tuning, and even heuristics. These approaches are no longer viable for RL acceleration when such internal knowledge is not available. Since learning convergence is determined by the size of the state space, where a larger state space generally means slower learning, reducing the state space by eliminating insignificant states can lead to faster learning. This paper introduces a novel algorithm called Online State Elimination in Accelerated Reinforcement Learning (OSE-ARL). The algorithm accelerates RL by distinguishing insignificant states from significant ones and then eliminating them from the state space in the early learning episodes. Applying OSE-ARL to grid-world robot navigation shows that it achieves learning convergence 1.46 times faster. The algorithm is generally applicable to other robotic task challenges and to robot learning in general with large-scale state spaces.
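The abstract does not give the algorithm's internals, but the general idea it describes (tabular RL where states deemed insignificant during early episodes are dropped from further exploration) can be sketched as follows. This is a minimal illustrative sketch only: the grid world, the visit-count elimination criterion, and all parameter values are assumptions for illustration, not the paper's actual OSE-ARL rules.

```python
import random

# Illustrative sketch of online state elimination in tabular Q-learning
# on a grid world. States rarely visited during the early learning phase
# are marked "eliminated" and no longer explored (only exploited).
# The elimination criterion (visit count < threshold) is a hypothetical
# stand-in for the paper's actual significance test.

SIZE = 5
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic grid-world transition with wall clamping."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    reward = 1.0 if nxt == GOAL else -0.01  # goal bonus, small step cost
    return nxt, reward, nxt == GOAL

def train(episodes=400, early_phase=100, alpha=0.5, gamma=0.95,
          eps=0.2, visit_threshold=5, seed=0):
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    visits = {s: 0 for s in Q}
    eliminated = set()
    for ep in range(episodes):
        s = (0, 0)
        for _ in range(100):
            visits[s] += 1
            # Eliminated states receive no further exploration.
            if s not in eliminated and rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            nxt, r, done = step(s, a)
            # Standard Q-learning update.
            Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
            s = nxt
            if done:
                break
        if ep == early_phase:
            # Online state elimination after the early episodes:
            # drop states that were rarely visited so far.
            eliminated = {s for s, v in visits.items()
                          if v < visit_threshold}
    return Q, eliminated

def greedy_path_length(Q, max_steps=50):
    """Follow the greedy policy from the start; return steps to goal."""
    s = (0, 0)
    for t in range(max_steps):
        a = max(range(4), key=lambda i: Q[s][i])
        s, _, done = step(s, a)
        if done:
            return t + 1
    return max_steps
```

Restricting eliminated states to greedy action selection is one simple way to realize "removal from the state space" without a model of the environment; the paper's actual mechanism may differ.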