Abstract: Many fundamental problems in shared-memory distributed computing, including mutual exclusion [James E. Burns and Nancy A. Lynch, 1993], consensus [Leqi Zhu, 2016], and implementations of many sequential objects [Prasad Jayanti et al., 2000], are known to require linear space in the worst case. However, these lower bounds all work by constructing, for any given algorithm, particular executions that may be both very long and very improbable. The significance of these bounds rests on the assumption that any space used in some execution must be allocated in all executions. This assumption does not hold for the storage allocation mechanisms of practical systems.
We consider the consequences of adopting a per-execution approach to space complexity, where an object only counts toward the space complexity of an execution if it is used in that execution. This allows us to show that many known randomized algorithms for fundamental problems in shared-memory distributed computing have expected space complexity much lower than the worst-case lower bounds, and that many algorithms that are adaptive in time complexity can also be made adaptive in space complexity. For the specific problem of mutual exclusion, we develop a new algorithm that illustrates an apparent trade-off between low expected space complexity and low expected RMR complexity. Whether this trade-off is necessary is an open problem.
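As a rough illustration of this per-execution accounting (our own sketch, not taken from the paper), the following Java fragment charges an execution only for the registers it actually touches; the class and its `spaceUsed` method are hypothetical names introduced here for exposition.

```java
import java.util.BitSet;

// Sketch of per-execution space accounting (illustrative; all names are
// hypothetical). The algorithm may declare n registers, but an execution
// is charged only for those it actually reads or writes.
final class AccountedRegisters {
    private final Object[] regs;   // worst-case footprint: n registers
    private final BitSet touched;  // registers used in this execution

    AccountedRegisters(int n) {
        regs = new Object[n];
        touched = new BitSet(n);
    }

    Object read(int i) {
        touched.set(i);
        return regs[i];
    }

    void write(int i, Object v) {
        touched.set(i);
        regs[i] = v;
    }

    // Per-execution space complexity: distinct registers actually used.
    int spaceUsed() {
        return touched.cardinality();
    }
}
```

Under this measure, an algorithm whose executions typically touch few registers has low expected space complexity even when its worst-case footprint is linear.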
For some applications, it may be helpful to pay only for objects that are updated, as opposed to those that are merely read. We give a data structure that requires no space to represent objects that are not updated, at the cost of a small overhead on those that are.
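One plausible realization of such a structure, sketched below under our own assumptions (a hash table keyed by the indices that have been written; `LazyRegisterArray` and its default-value convention are ours, not the paper's), allocates nothing for objects that are merely read and pays a small hashing overhead on updates.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: storage is allocated only for objects that are updated
// (illustrative; names are hypothetical, not from the paper).
final class LazyRegisterArray<V> {
    private final ConcurrentHashMap<Integer, V> written = new ConcurrentHashMap<>();
    private final V initialValue;

    LazyRegisterArray(V initialValue) {
        this.initialValue = initialValue;
    }

    V read(int i) {
        // Objects that were never updated cost no space: reads of them
        // fall through to the shared initial value.
        return written.getOrDefault(i, initialValue);
    }

    void write(int i, V v) {
        // Small per-update overhead: one hash-table entry per updated object.
        written.put(i, v);
    }
}
```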