Abstract: Modern control systems routinely employ wireless networks to exchange information among a large number of plants, sensors, and actuators. While wireless networks are characterized by random, rapidly changing conditions that challenge common control design assumptions, properly allocating communication resources helps maintain reliable operation. Designing resource allocation policies is usually challenging and requires explicit knowledge of the system and communication dynamics; recent works, however, have successfully explored deep reinforcement learning techniques to find optimal model-free resource allocation policies. Deep reinforcement learning algorithms do not necessarily scale well, which limits the immediate generalization of those approaches to large-scale wireless control systems. In this paper we discuss the use of reinforcement learning and graph neural networks (GNNs) to design model-free, scalable resource allocation policies. GNNs generalize the spatial-temporal convolutions of convolutional neural networks (CNNs) to data defined over arbitrary graphs and, in doing so, exploit the local regular structure encoded in the graph to reduce the dimensionality of the learning space. The architecture of the wireless network, in turn, defines an underlying communication graph that can serve as the basis for a GNN model. Numerical experiments show that the learned policies outperform baseline resource allocation solutions.
Keywords: Resource Allocation; Control over Networks; Graph Neural Networks; Reinforcement Learning Control; Neural Networks
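The abstract's key architectural point, that a GNN generalizes convolutions to graph data and keeps the number of trainable parameters independent of the network size, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it uses a common polynomial graph-filter parameterization, `Z = sigma(sum_k S^k X H_k)`, where `S` would be the communication graph's shift operator (e.g. its adjacency matrix) and `X` the per-node features (e.g. channel states). All names are hypothetical.

```python
import numpy as np

def gnn_layer(S, X, H, sigma=np.tanh):
    """One graph convolutional layer: Z = sigma(sum_k S^k X H_k).

    S: (n, n) graph shift operator (e.g. communication-graph adjacency)
    X: (n, f_in) node features (e.g. per-link channel states)
    H: list of K filter taps, each of shape (f_in, f_out)

    The parameter count, K * f_in * f_out, does not depend on n,
    which is the dimensionality reduction the abstract refers to.
    """
    n = S.shape[0]
    Sk = np.eye(n)                       # S^0 = identity
    Z = np.zeros((n, H[0].shape[1]))
    for Hk in H:
        Z += Sk @ X @ Hk                 # aggregate k-hop neighborhoods
        Sk = Sk @ S                      # advance to S^{k+1}
    return sigma(Z)

# Toy usage on a random 10-node graph with 3 input / 2 output features.
rng = np.random.default_rng(0)
n = 10
S = (rng.random((n, n)) < 0.3).astype(float)
X = rng.standard_normal((n, 3))
H = [rng.standard_normal((3, 2)) for _ in range(3)]  # K = 3 taps
Z = gnn_layer(S, X, H)
print(Z.shape)  # (10, 2): one output feature vector per node
```

Because the same filter taps `H` are applied at every node, the same trained layer can be evaluated on graphs of different sizes, which is what makes this parameterization attractive for large-scale wireless control systems.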