Abstract: In recent years, mapless navigation using deep reinforcement learning has shown significant advantages in improving robot motion-planning capabilities. However, most prior work has focused on aerial and ground robots, with little attention paid to unmanned surface vehicle (USV) navigation and eventual deployment on real platforms. In response, this paper proposes a mapless navigation method for USVs based on deep reinforcement learning. Specifically, we carefully design the observation space, action space, reward function, and neural network of a navigation policy that enables a USV equipped with only local sensors to reach its destination collision-free. To address the sim-to-real gap and the slow convergence of deep reinforcement learning, we propose a dynamics-free training and consistency strategy and design domain randomization and adaptive curriculum learning schemes. The method was evaluated through a range of tests in both simulated and physical environments and proven to work effectively in a real navigation environment.