Journal: Bulletin of the Technical Committee on Data Engineering
Year: 2019
Volume: 42
Issue: 3
Pages: 13-23
Publisher: IEEE Computer Society
Abstract: An essential ingredient of successful machine-assisted decision-making, particularly in high-stakes decisions, is interpretability – allowing humans to understand, trust and, if necessary, contest, the computational process and its outcomes. These decision-making processes are typically complex: carried out in multiple steps, employing models with many hidden assumptions, and relying on datasets that are often used outside of the original context for which they were intended. In response, humans need to be able to determine the “fitness for use” of a given model or dataset, and to assess the methodology that was used to produce it. To address this need, we propose to develop interpretability and transparency tools based on the concept of a nutritional label, drawing an analogy to the food industry, where simple, standard labels convey information about the ingredients and production processes. Nutritional labels are derived automatically or semi-automatically as part of the complex process that gave rise to the data or model they describe, embodying the paradigm of interpretability-by-design. In this paper we further motivate nutritional labels, describe our instantiation of this paradigm for algorithmic rankers, and give a vision for developing nutritional labels that are appropriate for different contexts and stakeholders.