Abstract: Industry 4.0 envisions adaptable and resilient manufacturing and logistics operations capable of handling dynamic changes or deviations in operations using intelligent sensing and computation technologies. Recent advances in artificial intelligence and collaborative robotics have created unprecedented opportunities to fully automate a variety of industrial tasks such as material handling, assembly, machine tending, and inspection, among others. With the rapidly growing interest in the vision of lot-size-of-one manufacturing, a fundamental and challenging question remains open: How can robots leverage knowledge of previously learned tasks to expedite their learning of new tasks? We tackle this problem by developing and testing a novel deep reinforcement learning framework with task modularization to enhance the adaptability of collaborative robots in performing a multitude of simulated tasks. The framework is built upon the actor-critic method and the notion of task modularity, and is compared against the Soft Actor-Critic (SAC) algorithm as a baseline. Numerical experiments on the Meta-World benchmark demonstrate the ability of the proposed framework to improve the adaptability and efficiency of collaborative robots on new tasks through task modularization and the transfer of policies from previously learned task modules.