Publisher: Institute for Operations Research and the Management Sciences (INFORMS), Applied Probability Society
Abstract: We consider a token-based joint autoscaling and load-balancing scheme, proposed in a recent paper by Mukherjee et al. [Mukherjee D, Dhara S, Borst SC, Van Leeuwaarden JSH (2017) Optimal service elasticity in large-scale distributed systems. Proc. ACM Measurement Anal. Comput. Systems 1(1):25:1–25:28.], which admits an efficient, scalable implementation and yet achieves asymptotically optimal steady-state delay performance and energy consumption as the number of servers N → ∞. In that work, the asymptotic results were obtained under the assumption that the queues have fixed-size finite buffers, leaving open the fundamental question of whether the scheme is stable with infinite buffers. In this paper, we settle this stability question. Stability of the system under the usual subcritical load assumption is not automatic; moreover, stability may fail for some values of N. The key challenge stems from the fact that the process lacks monotonicity, which has been the primary tool for establishing stability in load-balancing models. We develop a novel method to prove that the subcritically loaded system is stable for all sufficiently large N, and we establish convergence of the steady-state distributions to the optimal one as N → ∞. The method goes beyond state-of-the-art techniques: it combines an induction-based argument with a "weak monotonicity" property of the model. This technique is of independent interest and may have broader applicability.
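To make the class of schemes concrete, the following is a toy discrete-event simulation of a *generic* token-based joint autoscaling and load-balancing policy of the kind described above. All rules and parameters here (the routing fallback, the Exp(beta) idle timeout, the rates) are illustrative assumptions, not the exact TABS algorithm analysed in the paper.

```python
import heapq
import random
from itertools import count

def simulate(n=50, lam=0.7, mu=1.0, beta=0.5, horizon=5000.0, seed=1):
    """Toy sketch of a generic token-based scheme: idle switched-on
    servers hold a token at the dispatcher; an arrival goes to a tokened
    server if one exists, otherwise to a uniformly random server
    (switching it on if needed); a server idle for an Exp(beta) period
    switches off. Illustrative assumptions only, not the exact TABS
    algorithm of Mukherjee et al."""
    rng = random.Random(seed)
    q = [0] * n                  # queue length at each server
    on = [True] * n              # autoscaling state: switched on/off
    tokens = set(range(n))       # idle, switched-on servers
    epoch = [0] * n              # invalidates stale switch-off timers
    tie = count()                # unique tie-breaker for the event heap
    ev = [(rng.expovariate(lam * n), next(tie), "arr", None)]
    for s in range(n):           # initial idle-timeout timers
        heapq.heappush(ev, (rng.expovariate(beta), next(tie), "off", (s, 0)))
    max_total = 0
    while ev:
        t, _, kind, data = heapq.heappop(ev)
        if t > horizon:
            break
        if kind == "arr":
            if tokens:                      # prefer an idle tokened server
                s = tokens.pop()
            else:                           # else a uniformly random server,
                s = rng.randrange(n)        # switching it on if needed
                on[s] = True
            q[s] += 1
            if q[s] == 1:                   # server just became busy
                heapq.heappush(ev, (t + rng.expovariate(mu), next(tie), "dep", s))
            heapq.heappush(ev, (t + rng.expovariate(lam * n), next(tie), "arr", None))
            max_total = max(max_total, sum(q))
        elif kind == "dep":
            s = data
            q[s] -= 1
            if q[s] > 0:
                heapq.heappush(ev, (t + rng.expovariate(mu), next(tie), "dep", s))
            else:                           # idle again: issue a token and
                tokens.add(s)               # start a fresh switch-off timer
                epoch[s] += 1
                heapq.heappush(ev, (t + rng.expovariate(beta), next(tie),
                                    "off", (s, epoch[s])))
        else:  # "off": switch off only if idle ever since the timer was set
            s, e = data
            if e == epoch[s] and q[s] == 0 and on[s]:
                on[s] = False
                tokens.discard(s)
    return max_total
```

With a subcritical load (lam < mu), runs of `simulate()` exhibit a total queue length that stays bounded, which is the behaviour the paper proves rigorously for large N with infinite buffers; the sketch is only a numerical illustration and carries no proof content.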