Abstract: Priority queues with parallel access are an attractive data structure for applications like prioritized online scheduling, discrete event simulation, or greedy algorithms. However, a classical priority queue constitutes a severe bottleneck in this context, leading to very small throughput. Hence, there has been significant interest in concurrent priority queues with relaxed semantics. We investigate the complementary quality criteria rank error (how close are deleted elements to the global minimum) and delay (for each element x, how many elements with lower priority are deleted before x). In this paper, we introduce MultiQueues as a natural approach to relaxed priority queues based on multiple sequential priority queues. Their naturally high theoretical scalability is further enhanced by using three orthogonal ways of batching operations on the sequential queues. Experiments indicate that MultiQueues present a very good performance-quality tradeoff and considerably outperform competing approaches in at least one of these aspects. We employ a seemingly paradoxical technique of "wait-free locking" that might be of more general interest for converting sequential data structures into relaxed concurrent data structures.
Keywords: concurrent data structures; priority queues; randomized algorithms; wait-free locking
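
To make the idea named in the abstract concrete, here is a minimal C++17 sketch of the MultiQueue principle under stated assumptions: an array of sequential priority queues, insertion into a randomly chosen queue, and deletion from the better of two randomly sampled queues. All names (MultiQueue, tryPop, the cached-minimum field, the retry bound) are illustrative and not taken from the paper's implementation, and the reading of "wait-free locking" here, namely that threads only ever try_lock and move on to another random queue rather than waiting, is an assumption based on the abstract.

```cpp
// Illustrative sketch, not the authors' code. Compile with -std=c++17.
#include <atomic>
#include <cstddef>
#include <limits>
#include <mutex>
#include <optional>
#include <queue>
#include <random>
#include <vector>

class MultiQueue {
  struct alignas(64) LocalQueue {            // padded to reduce false sharing
    std::mutex lock;
    // min-heap: smaller key = higher priority
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;
    std::atomic<int> min{std::numeric_limits<int>::max()};  // cached minimum key
  };
  std::vector<LocalQueue> queues;

  std::size_t randomIndex() {
    thread_local std::mt19937 rng{std::random_device{}()};
    return std::uniform_int_distribution<std::size_t>(0, queues.size() - 1)(rng);
  }

public:
  explicit MultiQueue(std::size_t numQueues) : queues(numQueues) {}

  // Insert into a random queue. Only try_lock is used: on contention the
  // thread simply retries with another random queue instead of waiting.
  void push(int key) {
    for (;;) {
      LocalQueue& q = queues[randomIndex()];
      if (q.lock.try_lock()) {
        q.pq.push(key);
        q.min.store(q.pq.top(), std::memory_order_relaxed);
        q.lock.unlock();
        return;
      }
    }
  }

  // Relaxed deleteMin: sample two random queues and pop from the one whose
  // cached minimum is smaller. The returned key is not necessarily the
  // global minimum, only close to it with high probability.
  std::optional<int> tryPop() {
    for (int attempt = 0; attempt < 64; ++attempt) {
      LocalQueue& a = queues[randomIndex()];
      LocalQueue& b = queues[randomIndex()];
      LocalQueue& best =
          a.min.load(std::memory_order_relaxed) <= b.min.load(std::memory_order_relaxed) ? a : b;
      if (!best.lock.try_lock()) continue;             // contended: look elsewhere
      if (best.pq.empty()) { best.lock.unlock(); continue; }
      int key = best.pq.top();
      best.pq.pop();
      best.min.store(best.pq.empty() ? std::numeric_limits<int>::max() : best.pq.top(),
                     std::memory_order_relaxed);
      best.lock.unlock();
      return key;
    }
    return std::nullopt;                               // likely all sampled queues empty
  }
};
```

Under these assumptions a thread never blocks on a lock held by another thread; contention only redirects it to a different random queue, which is what makes the "wait-free locking" phrasing sensible despite the use of locks.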