
Basic Article Information

  • Title: Indoor Emergency Path Planning Based on the Q-Learning Optimization Algorithm
  • Authors: Shenghua Xu; Yang Gu; Xiaoyan Li
  • Journal: ISPRS International Journal of Geo-Information
  • Electronic ISSN: 2220-9964
  • Year: 2022
  • Volume: 11
  • Issue: 1
  • Pages: 66
  • DOI: 10.3390/ijgi11010066
  • Language: English
  • Publisher: MDPI AG
  • Abstract: The internal structure of buildings is becoming increasingly complex. Providing a scientific and reasonable evacuation route for trapped persons in a complex indoor environment is important for reducing casualties and property losses. In emergency and disaster-relief settings, indoor path planning involves great uncertainty and stricter safety requirements. Q-learning is a value-based reinforcement learning algorithm that can complete path planning tasks through autonomous learning, without building mathematical models or environmental maps. We therefore propose an indoor emergency path planning method based on a Q-learning optimization algorithm. First, a grid environment model is established. A discount rate applied to the exploration factor is used to optimize the Q-learning algorithm: the exploration factor in the ε-greedy strategy is dynamically adjusted before each random-action selection, which accelerates convergence in a large-scale grid environment. Indoor emergency path planning experiments based on the optimized algorithm were carried out on both simulated data and real indoor environment data. The proposed Q-learning optimization algorithm essentially converges after 500 learning iterations, roughly 2000 rounds earlier than the classic Q-learning algorithm, while the SARSA algorithm shows no clear convergence trend within 5000 iterations. The results show that, when planning the shortest path in a grid environment, the proposed Q-learning optimization algorithm outperforms both the SARSA algorithm and the classic Q-learning algorithm in solving time and convergence speed; its convergence is approximately five times faster than that of the classic Q-learning algorithm. In the grid environment, the optimized algorithm can successfully plan the shortest path that avoids obstacle areas in a short time.
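The core idea in the abstract — tabular Q-learning on a grid, with the ε-greedy exploration factor multiplied by a discount rate before each action selection — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the grid layout, reward values, and all hyperparameters are assumptions chosen for a toy example.

```python
import random

# Toy grid: S = start, G = goal (exit), # = obstacle, . = free cell.
GRID = [
    "S....",
    ".##..",
    ".#...",
    "...#.",
    "....G",
]
ROWS, COLS = len(GRID), len(GRID[0])
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic transition: walls and obstacles block movement."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or GRID[r][c] == "#":
        return state, -5.0, False          # bumped; small penalty
    if GRID[r][c] == "G":
        return (r, c), 100.0, True         # reached the exit
    return (r, c), -1.0, False             # each ordinary move costs 1

def train(episodes=3000, alpha=0.1, gamma=0.9,
          eps=1.0, eps_decay=0.995, eps_min=0.05, seed=0):
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(200):               # cap episode length
            # Decay the exploration factor BEFORE choosing an action,
            # mirroring the abstract's dynamic adjustment of epsilon.
            eps = max(eps_min, eps * eps_decay)
            if rng.random() < eps:
                a = rng.randrange(4)       # explore
            else:
                a = max(range(4), key=lambda i: Q[state][i])  # exploit
            nxt, reward, done = step(state, ACTIONS[a])
            # Standard off-policy Q-learning update.
            Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
            state = nxt
            if done:
                break
    return Q

def greedy_path(Q, start=(0, 0), limit=50):
    """Follow the learned greedy policy from start toward the goal."""
    path, state = [start], start
    while GRID[state[0]][state[1]] != "G" and len(path) < limit:
        a = max(range(4), key=lambda i: Q[state][i])
        state, _, _ = step(state, ACTIONS[a])
        path.append(state)
    return path
```

On this toy grid the shortest obstacle-free route from S to G takes 8 moves, and `greedy_path(train())` recovers it; decaying ε toward a floor trades early random exploration for later exploitation, which is the mechanism the abstract credits for the faster convergence.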