Abstract: Algorithmic tools for predicting violence and criminality are increasingly deployed in policing, bail, and sentencing. Scholarly attention to date has focused on these tools’ procedural due process implications. This Article considers their interaction with the enduring racial dimensions of the criminal justice system. I consider two alternative lenses for evaluating the racial effects of algorithmic criminal justice: constitutional doctrine and emerging technical standards of “algorithmic fairness.” I argue first that constitutional doctrine is poorly suited to the task. It often fails to capture the full spectrum of racial issues that can arise in the use of algorithmic tools in criminal justice. Emerging technical standards of algorithmic fairness are at least attentive to the specifics of the relevant technology. But the technical literature has failed to grapple with how, or whether, various technical conceptions of fairness track policy-significant consequences. Drawing on the technical literature, I propose a reformulated metric for considering racial equity concerns in algorithmic design: Rather than asking about abstract definitions of fairness, a criminal justice algorithm should be evaluated in terms of its long-term, dynamic effects on racial stratification. The metric of nondiscrimination for an algorithmically assigned form of state coercion should focus on the net burden that coercion places on a racial minority.