
Article Information

  • Title: Imitation of Real Lane-Change Decisions Using Reinforcement Learning
  • Authors: Lu Zhao; Nadir Farhi; Zoi Christoforou
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2021
  • Volume: 54
  • Issue: 2
  • Pages: 203-209
  • DOI: 10.1016/j.ifacol.2021.06.023
  • Language: English
  • Publisher: Elsevier
  • Abstract: Microscopic modeling of human driving generally combines a car-following model with a lane-change model. While the human car-following process has been extensively studied and is well modeled, lane-change behavior is more complex to understand and remains to be explored. Classical lane-change models are usually rule-based and handcrafted, and tend to exhibit limited performance. Machine-learning algorithms, particularly Reinforcement Learning (RL), provide an alternative approach and have recently achieved considerable success in modeling difficult decision-making processes in many fields. In this article we propose a reinforcement-learning-based model of human lane-change behavior, calibrated online against real lane-change decisions extracted from the NGSIM data-set. In addition, we use the traffic simulator SUMO ("Simulation of Urban Mobility") to create a numerical simulation environment. Numerical traffic simulation allows us to enrich the data-set used to train the agent toward an optimal lane-change policy: about 13% additional traffic situations, not present in the real data, are created by the simulation environment. The trained agent is collision-free and human-like, behaving satisfactorily on both the real data and the additional simulated data. Moreover, our RL model reproduces up to 95.37% of the real decisions observed in the data-set. (An illustrative sketch of such a SUMO-based simulation environment follows this record.)
  • Keywords: Traffic Models; Artificial intelligence in transportation; lane-change model; reinforcement learning; human driving behavior
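
To make the workflow summarized in the abstract more concrete, the sketch below shows one way a lane-change decision step could be exposed to an RL agent through SUMO's TraCI Python API. This is a minimal, hypothetical illustration, not the authors' implementation: the scenario file `highway.sumocfg`, the ego-vehicle ID, the state features, and the reward terms are assumptions, and the imitation reward against the recorded NGSIM decisions is only indicated in a comment.

```python
# Minimal, hypothetical sketch (not the authors' code): a gym-style wrapper
# that exposes one ego vehicle's lane-change decision to an RL agent through
# SUMO's TraCI Python API.
import traci

SUMO_CMD = ["sumo", "-c", "highway.sumocfg"]  # hypothetical SUMO scenario


class LaneChangeEnv:
    """One RL step = one lane-change decision (keep / left / right)."""

    def __init__(self, ego_id="ego"):
        self.ego_id = ego_id

    def reset(self):
        traci.start(SUMO_CMD)
        traci.simulationStep()
        return self._observe()

    def _observe(self):
        # Small illustrative state: ego speed, lane index, gap to leader.
        speed = traci.vehicle.getSpeed(self.ego_id)
        lane = traci.vehicle.getLaneIndex(self.ego_id)
        leader = traci.vehicle.getLeader(self.ego_id, 100.0)
        gap = leader[1] if leader is not None else 100.0
        return (speed, lane, gap)

    def step(self, action):
        # action: 0 = keep lane, 1 = change left, 2 = change right
        # (in SUMO, higher lane indices are further to the left)
        lane = traci.vehicle.getLaneIndex(self.ego_id)
        n_lanes = traci.edge.getLaneNumber(traci.vehicle.getRoadID(self.ego_id))
        if action == 1 and lane + 1 < n_lanes:
            traci.vehicle.changeLane(self.ego_id, lane + 1, 2.0)
        elif action == 2 and lane > 0:
            traci.vehicle.changeLane(self.ego_id, lane - 1, 2.0)
        traci.simulationStep()

        collided = self.ego_id in traci.simulation.getCollidingVehiclesIDList()
        done = collided or self.ego_id not in traci.vehicle.getIDList()
        obs = None if done else self._observe()
        # Illustrative reward: penalize collisions; a term rewarding agreement
        # with the human decision recorded for the same situation would be
        # added to realize the imitation objective described in the abstract.
        reward = -10.0 if collided else 0.0
        return obs, reward, done

    def close(self):
        traci.close()
```

A standard RL algorithm could then be trained on these reset/step transitions, comparing each chosen action with the human decision observed in the corresponding NGSIM situation, in line with the imitation objective described in the abstract.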