Abstract: Motivated by chance constrained optimisation problems that arise in stochastic model predictive control, we investigate the connections between compression learning and scenario-based optimisation. We discuss how compression learning provides powerful insight into a fundamental property ensuring that optimal solutions to optimisation problems formulated using a finite number of realisations of the uncertainty will also be feasible for other, unseen instances of the uncertainty. This property, known as consistency, roughly translates to the requirement that a subset of the scenarios of fixed cardinality is enough to encode all the information needed to reconstruct the optimal solution; all remaining scenarios are in a sense redundant. Computationally, the catch is of course that it is impossible to know a priori which scenarios will be essential and which will not. Moreover, the "unnecessary" scenarios are not wasted, even in theory: their presence is what provides the confidence level with which we can state that the solution is feasible for unseen uncertainty instances. We demonstrate this connection through chance constrained optimisation programs based on a combination of scenarios and robust optimisation.
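The compression property can be illustrated with a minimal sketch. Consider the hypothetical one-dimensional scenario program of minimising x subject to x ≥ δ_i for N sampled scenarios δ_i (an illustrative toy, not an example from the paper): the solution is x* = max_i δ_i, so a single scenario, the largest sample, suffices to reconstruct the solution, while the remaining N − 1 scenarios are redundant in the sense described above.

```python
import random

# Toy 1-D scenario program (hypothetical illustration):
#   minimise x  subject to  x >= delta_i  for each sampled scenario delta_i.
random.seed(0)
N = 100
scenarios = [random.gauss(0.0, 1.0) for _ in range(N)]

# The scenario solution is x* = max_i delta_i.
x_star = max(scenarios)

# Compression set: the scenarios needed to reconstruct x*.
# With continuous samples, ties are (almost surely) absent, so it has cardinality 1.
support = [d for d in scenarios if d == x_star]

# Removing any non-support scenario leaves the solution unchanged:
# the solution is encoded entirely by the support scenario.
others = [d for d in scenarios if d != x_star]
assert max(others + support) == x_star
```

Which sample turns out to be the support scenario is only revealed after solving, matching the observation that one cannot know a priori which scenarios are essential; the remaining samples are what underpin the probabilistic feasibility guarantee for unseen instances.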