Abstract: This work presents the experimental assessment of a hybrid control scheme based on Deep Reinforcement Learning (DRL) for obstacle avoidance in robot manipulators. More precisely, relying on an equivalent Linear Parameter Varying (LPV) state-space representation of the system, two operative modes, one based on both joint positions and velocities and one based only on velocity inputs, are activated depending on the measured distance between the robot and the obstacle. When the obstacle is close to the robot, a switching mechanism enables the DRL algorithm in place of the basic motion planner, giving rise to a self-configuring architecture able to cope with objects moving randomly in the workspace. The experimental tests of the DRL-based collision-avoidance hybrid strategy are carried out on a physical EPSON VT6 robot manipulator with satisfactory results.
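As a purely illustrative sketch (not taken from the paper), the distance-triggered switching between the basic motion planner and the DRL avoidance policy could be expressed as follows; the names `DISTANCE_THRESHOLD`, `planner_step`, and `drl_policy_step` are assumptions introduced here for illustration only.

```python
# Hypothetical sketch of the distance-based mode switching; all names and the
# threshold value are illustrative assumptions, not the paper's implementation.

DISTANCE_THRESHOLD = 0.30  # [m] assumed activation distance


def select_command(robot_obstacle_distance, planner_step, drl_policy_step, state):
    """Return the joint command from the currently active operative mode.

    When the measured robot-obstacle distance falls below the threshold,
    the DRL-based avoidance policy replaces the basic motion planner.
    """
    if robot_obstacle_distance < DISTANCE_THRESHOLD:
        # Obstacle is close: the DRL policy (velocity-input mode) takes over.
        return drl_policy_step(state)
    # Obstacle is far: the nominal planner (position/velocity mode) stays active.
    return planner_step(state)
```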