Abstract: We propose a deep neural network architecture for storing approximate Lyapunov functions of systems of ordinary differential equations. Under a small-gain condition on the system, the number of neurons needed to approximate a Lyapunov function to a fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach overcomes the curse of dimensionality.
Keywords: deep neural network; Lyapunov function; stability; small-gain condition; curse of dimensionality
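To make the polynomial-scaling claim concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of a compositional network: the Lyapunov candidate is represented as a sum of small subnetworks, each acting on a low-dimensional group of state variables, so the total neuron count grows linearly in the number of groups rather than exponentially in the state dimension. All names and parameters (`CompositionalLyapunovNet`, `group_size`, `hidden`) are illustrative assumptions, as is the separable form of the Lyapunov function suggested by the small-gain setting.

```python
# Hypothetical sketch of a compositional Lyapunov-function network.
# Assumption: under a small-gain condition the Lyapunov function admits a
# separable form V(x) ~ sum_i V_i(z_i), where each z_i is a low-dimensional
# group of state variables and each V_i is a small feed-forward subnetwork.
import torch
import torch.nn as nn

class CompositionalLyapunovNet(nn.Module):
    def __init__(self, state_dim: int, group_size: int = 2, hidden: int = 32):
        super().__init__()
        assert state_dim % group_size == 0
        self.group_size = group_size
        # One small subnetwork per group of state variables; the total
        # neuron count is (state_dim / group_size) * O(hidden), i.e. it
        # grows only polynomially (here linearly) in the state dimension.
        self.subnets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(group_size, hidden),
                nn.Softplus(),
                nn.Linear(hidden, hidden),
                nn.Softplus(),
                nn.Linear(hidden, 1),
            )
            for _ in range(state_dim // group_size)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the state into low-dimensional groups and sum the
        # subnetwork outputs to obtain the Lyapunov candidate V(x).
        groups = x.split(self.group_size, dim=-1)
        return sum(net(z) for net, z in zip(self.subnets, groups))

# Usage: a 10-dimensional state handled by five 2-dimensional subnetworks.
net = CompositionalLyapunovNet(state_dim=10)
x = torch.randn(4, 10)   # batch of 4 sample states
V = net(x)               # shape (4, 1): candidate Lyapunov values
```

Such a network would then be trained so that V is positive definite and decreases along the system's trajectories; the sketch only illustrates why the architecture's size scales mildly with the dimension.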