Comparing the effectiveness of interventions is now a requirement for regulatory approval in several countries, and it also aids clinical and public health decision-making. However, in the absence of head-to-head randomized controlled trials (RCTs), determining the relative effectiveness of interventions is challenging, and several methodological options are now available. We aimed to compare the validity of adjusted indirect comparisons of RCTs with that of the mixed treatment comparison approach.
Through systematic searching, we identified all meta-analyses evaluating more than 3 interventions for a similar disease state with binary outcomes. From each clinical trial we abstracted data, including sample size and outcomes. We conducted a fixed-effects meta-analysis of each intervention versus the common comparator and then applied the adjusted indirect comparison. We also conducted a mixed treatment meta-analysis of all trials, and we compared the point estimates and 95% confidence/credible intervals (CIs/CrIs) between the two approaches to identify important differences.
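The adjusted indirect comparison step can be illustrated with the standard Bucher calculation, in which the meta-analytic estimates of treatment A versus the common comparator C and treatment B versus C, on the log odds ratio scale, are differenced to estimate A versus B, with their variances added. A minimal sketch, with hypothetical function and variable names (the source does not specify its software):

```python
import math

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs B via common comparator C.

    log_or_ac, se_ac: pooled log odds ratio and standard error, A vs C.
    log_or_bc, se_bc: pooled log odds ratio and standard error, B vs C.
    Returns the indirect log OR for A vs B, its SE, and a 95% CI,
    assuming the two direct estimates are independent.
    """
    # Point estimate: difference of the two log odds ratios.
    log_or_ab = log_or_ac - log_or_bc
    # Variances add because the two meta-analyses are independent.
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    # Normal-approximation 95% confidence interval.
    ci = (log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab)
    return log_or_ab, se_ab, ci
```

For example, with hypothetical pooled estimates log OR(A vs C) = 0.5 (SE 0.1) and log OR(B vs C) = 0.2 (SE 0.1), the indirect log OR(A vs B) is 0.3 with SE of about 0.141; exponentiating converts these back to the odds ratio scale.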
We included data from 7 reviews that met our inclusion criteria, allowing a total of 51 comparisons. According to our a priori consistency rule, we found 2 examples where the comparison was statistically significant with the mixed treatment comparison but not with the adjusted indirect comparison, and 1 example of the reverse. We found 6 examples where the direction of effect differed between the two indirect comparison methods, and 9 examples where the confidence intervals differed importantly between approaches.
In most analyses, the adjusted indirect comparison yielded estimates of relative effectiveness similar to those of the mixed treatment comparison. In less complex indirect comparisons, where all studies share a common comparator, the two approaches yield similar results. As comparisons become more complex, the mixed treatment comparison may be favoured.