Abstract: In this paper we consider a network of processors that aim to cooperatively solve linear programming problems subject to uncertainty. Each node knows only a common cost function and its local uncertain constraint set. We propose a randomized, distributed algorithm working under a time-varying, asynchronous, and directed communication topology. The algorithm is based on a local computation and communication paradigm. At each communication round, nodes perform two updates: (i) a verification step in which they check, in a randomized setup, the robust feasibility (and hence optimality) of the candidate optimal point, and (ii) an optimization step in which they exchange their candidate bases (minimal sets of active constraints) with neighbors and locally solve an optimization problem whose constraint set includes a sampled constraint violated by the candidate optimal point (if one exists), the agent's current basis, and the collection of its neighbors' bases. As our main result, we show that if a processor successfully performs the verification step for a sufficient number of communication rounds, it can stop the algorithm, since consensus has been reached. The common solution is, with high confidence, feasible (and hence optimal) for the entire uncertainty set except for a subset of arbitrarily small probability measure. We show the effectiveness of the proposed distributed algorithm on a multi-core platform in which the nodes communicate asynchronously.
Keywords: Distributed optimization; Randomized algorithms; Robust linear programming; Optimization; Control of large-scale network systems; Large-scale optimization problems
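As a rough illustration of the per-node update outlined in the abstract, the sketch below shows how a single node might carry out the verification and optimization steps in one communication round. It is a minimal sketch under assumptions of our own: the function name node_round, the constraint representation (a, b) for a^T x <= b, the sampler draw_constraint, and the use of SciPy's linprog are illustrative choices, not the paper's implementation.

```python
# Minimal sketch of one node's communication round, loosely following the
# two-step (verification / optimization) update described in the abstract.
# Everything below is an illustrative assumption, not the paper's code:
# constraints are modeled as pairs (a, b) meaning a^T x <= b, the uncertainty
# is accessed through a user-supplied sampler, and SciPy's linprog is used
# as a generic LP solver.
import numpy as np
from scipy.optimize import linprog

def node_round(c, own_basis, neighbor_bases, draw_constraint,
               x_candidate, n_verification_samples=100, tol=1e-9):
    """One round for a single node.

    c              : cost vector of the common linear objective (minimized).
    own_basis      : list of (a, b) pairs, the node's current candidate basis.
    neighbor_bases : list of (a, b) pairs received from in-neighbors.
    draw_constraint: callable returning a randomly sampled (a, b) pair.
    x_candidate    : current candidate optimal point (None before the first round).
    """
    # (i) Verification: sample constraints and look for one violated by the
    # candidate point; in the full algorithm a node stops only after this
    # check succeeds for a sufficient number of consecutive rounds.
    violating = None
    if x_candidate is not None:
        for _ in range(n_verification_samples):
            a, b = draw_constraint()
            if a @ x_candidate > b + tol:
                violating = (a, b)
                break

    # (ii) Optimization: solve an LP whose constraints are the node's basis,
    # the neighbors' bases, and the violating sample (if any).  The combined
    # set is assumed non-empty and to keep the LP bounded.
    constraints = list(own_basis) + list(neighbor_bases)
    if violating is not None:
        constraints.append(violating)
    A_ub = np.array([a for a, _ in constraints], dtype=float)
    b_ub = np.array([b for _, b in constraints], dtype=float)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * len(c))
    x_new = res.x
    # Keep the constraints active at the new point as the (approximate) new basis.
    new_basis = [(a, b) for a, b in constraints if abs(a @ x_new - b) <= tol]
    verified = violating is None
    return x_new, new_basis, verified
```

In the full scheme described in the abstract, each node would repeat such a round over the time-varying communication graph, exchanging bases with its current neighbors, and would halt once the verification step succeeds for sufficiently many rounds.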