Comparing more than two diagnostic or screening tests for predicting the presence or absence of a disease or condition can be complicated when attempting to simultaneously optimize a pair of competing criteria such as sensitivity and specificity. We describe a technique for quantifying the relative superiority of a diagnostic test in this setting when a gold standard exists. The proposed {\it superiority index} is used to quantify and rank the performance of individual diagnostic tests and of combinations of tests. Development of a validated model containing a subset of the tests may be improved by eliminating tests with very small values of this index. To illustrate, we present an example using a large battery of neuropsychological tests for the prediction of cognitive impairment. Using the proposed index, the battery is reduced with favorable results.